AI Profiles are the control centre for how Affino AI behaves across multiple workflows. They define the model, the response style, the organisation context, the safety and compliance framing, and the feature-specific instructions used by downstream AI surfaces.
With release 9.0.11, AI Profiles became materially more important. They now support GPT-5 model variants, a richer prompts panel, workflow-specific prompt modifiers, tighter configuration security, News Design Element settings, and forum-specific AI response controls.
That means most AI-quality problems should be investigated at profile level before you assume an article screen, a forum, or a search tool is broken.
Use this path when you want to set up an AI Profile that is reliable enough for production testing.
Do not start by changing every modifier you can see. A strong baseline profile is easier to troubleshoot than an aggressively customised one.
Treat AI Profiles as shared operational settings, not casual personal preferences. A profile change can affect multiple customer-facing and editorial workflows at once.
Start with the clearest possible role and context. If the AI does not know what organisation it is representing, what services matter, or what tone and safety boundaries apply, the downstream modifiers have much less to work with.
Change one layer at a time. Adjust the model, the profile context, or one modifier family before you change everything together.
Keep ownership tight. 9.0.11 added stronger security for exactly this reason: uncontrolled AI setting changes create noisy quality problems that are hard to trace.
9.0.11 upgraded the platform AI engine to GPT-5 model variants and separated output detail from the prompt itself.
AI Profiles can now use GPT-5 options tuned for different trade-offs, while reasoning effort defaults to a quality-focused medium level. Verbosity is configured separately with low, medium, and high settings so you can influence answer length and detail without rewriting the whole prompt strategy.
Use a lighter model when speed or cost matters more than depth. Use a more conversational or capable model when the workflow depends on richer answers. Use verbosity to tune how much detail the user should receive before you reach for prompt rewrites.
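As a rough illustration of that split, model choice and verbosity can be treated as independent knobs. The sketch below is hypothetical Python; the field names and values are assumptions for illustration, not Affino's actual configuration schema.

```python
# Hypothetical sketch of the 9.0.11 model/verbosity split described above.
# Field names and values are illustrative, not Affino's actual schema.

PROFILE_DEFAULTS = {
    "model": "gpt-5-mini",         # lighter variant: favour speed and cost
    "reasoning_effort": "medium",  # quality-focused default per the release
    "verbosity": "low",            # answer length/detail, set separately from the prompt
}

def choose_settings(needs_depth, needs_detail):
    """Pick model and verbosity independently, as the guidance recommends."""
    settings = dict(PROFILE_DEFAULTS)
    if needs_depth:
        settings["model"] = "gpt-5"      # more capable variant for richer answers
    if needs_detail:
        settings["verbosity"] = "high"   # more detail without rewriting prompts
    return settings

print(choose_settings(needs_depth=True, needs_detail=False))
```

The point of the sketch is that reaching for a bigger model and reaching for longer answers are separate decisions, so you can change one without disturbing the other.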
The prompts panel is where you teach the AI who it is acting for and how it should behave. In 9.0.11 the panel supports fields such as AI Role, Organisation Information, Organisation Objectives, Services Provided, Content Available, Answering Style, Legal Compliance and Safety, and a System Role Modifier.
These fields matter because they become the profile-level shaping layer that downstream workflows inherit. If the AI sounds generic or drifts out of policy, this is one of the first places to review.
Write this layer for clarity, not for cleverness. A simple, concrete description of the organisation and expected behaviour is usually stronger than a long speculative prompt.
AI Profiles do not replace Centralised Prompt Management. They work with it.
In 9.0.11 the backend assembles structured prompts from the master central prompt templates plus the relevant AI Profile context and modifiers. Empty modifier fields are omitted automatically, and older custom prompts are archived for reference rather than remaining active in the main workflow.
Use the central prompt system for the shared baseline across your organisation. Use the AI Profile for local shaping: role, services, safety, content boundaries, and workflow-specific emphasis. If multiple AI surfaces show the same problem, investigate the central prompt family first. If one profile behaves differently from the rest, investigate the profile layer first.
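The assembly behaviour described above (central template plus profile context, with empty modifier fields dropped) can be pictured with a small sketch. This is hypothetical Python under assumed names; it is not Affino's actual implementation.

```python
# Hypothetical sketch of central-template + profile assembly, with empty
# modifier fields omitted automatically. All names are illustrative.

CENTRAL_TEMPLATE = "You are an assistant for {org}. {role} {style} {safety}"

def assemble_prompt(template, profile):
    # Empty fields drop out rather than leaving blank instructions behind.
    filled = {key: (value or "") for key, value in profile.items()}
    prompt = template.format(**filled)
    return " ".join(prompt.split())  # collapse gaps left by omitted fields

profile = {
    "org": "Example Media Ltd",
    "role": "Answer as the editorial help desk.",
    "style": "",                     # empty -> omitted from the final prompt
    "safety": "Never give legal advice.",
}
print(assemble_prompt(CENTRAL_TEMPLATE, profile))
```

The design point is the split of responsibilities: the template carries the shared baseline, the profile carries local shaping, and blank fields cost nothing.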
The 9.0.11 AI Profile changes introduced dedicated modifier fields for multiple downstream workflows. These include modifiers for question generation, contextual retrieval, site search, article summaries, sharelines, and forum behaviour.
Use these fields when one workflow needs extra steering without changing the entire profile foundation. For example, you might want article summaries to stay concise while site search stays more explanatory.
This is one of the biggest practical improvements in 9.0.11 because it lets operators tune behaviour with much less risk than hand-editing a single giant prompt for everything.
Live Edit workflows now use Prompt Modifier fields instead of older direct prompt controls. Each field shows the current profile value as a starting point, but the editor can make a one-off adjustment for that generation run without saving the change back to the AI Profile.
This is useful when you need to steer one output without rewriting the long-term default. It is especially helpful for summaries, sharelines, or question-generation tasks where the editorial need is temporary.
Use live-edit overrides for exceptions. Use the AI Profile for defaults. If operators find themselves retyping the same one-off override repeatedly, that is a signal that the profile itself probably needs improvement.
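The override rule above is simple enough to state as code: a one-off value wins for that generation run only, and the stored profile default is never mutated. The names here are illustrative, not Affino's actual API.

```python
# Hypothetical sketch of the live-edit override rule: the one-off value is
# used for this run only; the saved AI Profile default is never changed.

def effective_modifier(profile_default, live_override=None):
    return live_override if live_override else profile_default

profile = {"article_summary_modifier": "Keep summaries under 60 words."}

# A one-off editorial need for a single generation run:
used = effective_modifier(profile["article_summary_modifier"],
                          "Lead with the financial angle this time.")
print(used)                                   # the override applies to this run
print(profile["article_summary_modifier"])    # the stored default is unchanged
```

If the same override keeps being typed in, that second print is the tell: the persistent default no longer matches what editors actually want.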
AI Profiles now hold important settings for other major features as well.
For the News Design Element, the AI Profile controls sections, security clearances, prompt instructions, time frames, topics, tooltips, no-news behaviour, and display limits.
For forums, the AI Profile now includes a Forum Prompt Modifier so administrators can shape how AI Auto Response behaves in community contexts. That sits alongside per-forum enablement and Forum Profile styling.
This is why AI Profiles should be reviewed whenever News Design Element or forum AI behaviour looks off. Those features are not only front-end settings; they are profile-driven.
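To make "profile-driven" concrete, the feature settings listed above can be pictured as data held on the profile rather than on the front end. The keys below are assumptions for illustration only, not Affino's actual schema.

```python
# Hypothetical sketch of feature settings carried on an AI Profile, per the
# text above. Keys and values are illustrative, not Affino's actual schema.

news_design_element = {
    "sections": ["Technology", "Media"],
    "security_clearances": ["public"],
    "prompt_instructions": "Summarise each story in one neutral sentence.",
    "time_frame_days": 7,
    "topics": ["AI", "publishing"],
    "tooltip": "AI-generated news digest",
    "no_news_message": "No matching news in this period.",
    "display_limit": 5,
}

forum_settings = {
    "auto_response_enabled": True,  # sits alongside per-forum enablement
    "forum_prompt_modifier": "Reply in a friendly, community-appropriate tone.",
}

# A surface reads its behaviour from the profile, not from front-end settings:
print(news_design_element["display_limit"], forum_settings["auto_response_enabled"])
```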
9.0.11 introduced an AI Configuration Security Group setting so you can control who is allowed to change AI-related settings outside the AI Profile itself. This matters because article summary prompts, sharelines settings, and similar controls can alter public-facing behaviour.
Use this to keep governance clear. Decide who owns profile defaults, who is allowed to make local overrides, and who can review AI behaviour in production workflows.
Good governance is not bureaucracy for its own sake. It is what keeps one operator's experimental change from quietly affecting multiple surfaces without review.
When AI output quality shifts, start with a simple diagnostic path: check the shared central prompt first, then the AI Profile context fields, then the workflow-specific modifier for the affected surface, and finally any one-off live-edit override.
This order matters because it prevents you from making random edits at the wrong layer. Most AI-quality problems become much easier to solve when you know whether the issue came from the shared prompt, the profile, a workflow-specific modifier, or a one-off override.
Use this checklist before you start rewriting prompts wholesale.
The most useful next reads are the Centralised Prompt Management Guide, the News Design Element Guide, and the Affino AI Guide.
Meetings:
Google Meet and Zoom
Venue:
Soho House, Soho Works +
Registered Office:
55 Bathurst Mews
London, UK
W2 2SB
© Affino 2026