Search
Close this search box.

Powering Salesforce with AI: Keep Your Data Secure with the Einstein Trust Layer

In other parts of this series, we’ve covered some of the basics of Data Cloud, Prompt Builder, and Agentforce which ties it all together. As businesses look to adopt these features to enhance productivity and personalize customer interactions, one question often comes to the forefront: Is my data secure? For this final part of our series, we’ll cover Salesforce’s distinguishing feature to address this concern—the Einstein Trust Layer, a robust security framework designed to ensure your data remains private and protected, even when leveraging AI.

The Einstein Trust Layer acts as a secure data gateway between the user and the large language model (LLM) such as OpenAI’s ChatGPT. From masking sensitive information to applying toxicity filters, this trust layer ensures that prompts and responses are handled securely and appropriately, every step of the way. Let’s break down how the Einstein Trust Layer works—first as a prompt passes data from the user to the LLM, and then as the LLM’s response returns through the Trust Layer to the user.

The Prompt — Securing Data from User to LLM

Your data first goes through the Einstein Trust Layer as your users interact with your Agentforce agents. For example, your AI agent may have a prompt template made in Prompt Builder that asks an LLM to summarize your recent interactions with a customer across all your systems. As this prompt is delivered to the LLM, the Einstein Trust Layer applies multiple security measures to ensure that both the request and the data retrieved are securely handled. 

First, using Secure Data Retrieval, it dynamically grounds your prompt with your customer’s data. Next, it applies Data Masking to sensitive fields or Personally Identifiable Information (PII) like name, date of birth, address, or social security. Then after applying some additional guardrails through its Prompt Defense, the Einstein Trust Layer transmits the secure prompt to the LLM. To further enhance this data privacy, any partnerships Salesforce has with LLM providers require a Zero Retention Policy, ensuring that your customer’s data isn’t saved or used for AI training by third-parties.

Let’s walk through each of these steps as the prompt travels through the Einstein Trust Layer from the user to the LLM in greater detail.

1. Dynamic Grounding and Secure Data Retrieval

The first step in this path is dynamic grounding, which enriches the prompt with contextual, relevant data from your Salesforce environment. Dynamic grounding ensures that the prompt becomes more specific and relevant to the task at hand. Instead of simply submitting a vague or generic prompt, the system dynamically incorporates business data such as customer details, order history, or relevant support articles.

This process leverages adaptive placeholders—similar to merge tags—which automatically adjust based on the scenario. For example, if you’re interacting with John Doe, the placeholders dynamically update to pull in John’s information, like his name, address, and purchase history. Dynamic grounding evolves with the interaction, ensuring that real-time data is incorporated as the conversation continues.

As this data is pulled into the prompt, secure data retrieval ensures that it respects your organization’s existing Salesforce permissions and access controls. This guarantees that users only retrieve data they are authorized to see, reinforcing your internal security measures.

2. Data Masking

When a prompt includes sensitive information such as names, addresses, or financial data, the Trust Layer automatically applies data masking. Rather than exposing this sensitive data to the LLM, Salesforce replaces it with placeholder tokens (e.g., “Customer_Name” or “Account_Number”). This masking keeps the actual data protected from the LLM while still providing enough context for the AI to generate a meaningful response.

Once the response is generated, the Trust Layer will reinsert the original data back into the final output for the user to view. This approach ensures privacy while allowing AI to deliver relevant and personalized results.

3. Prompt Defense

As an added security measure to ensure that AI responses remain appropriate and aligned with your business standards, Salesforce also applies Prompt Defense. These are additional guardrails—automatically embedded by Salesforce—that limit the potential for unintended, harmful, or inappropriate outputs. Prompt Defense helps mitigate risks like biased, irrelevant, or offensive responses by including additional parameters and instructions around what the LLM can and cannot return. For example, it might instruct the LLM to not make statements or conclusions that it doesn’t have data or basis in.

Not only does this ensure responses are more accurate and relevant, it also protects against Prompt Injection attacks where bad actors or hackers may try to manipulate the LLM’s output to ignore system guardrails and provide sensitive information that it wouldn’t ordinarily be authorized to provide.

4. Zero Data Retention

After the Einstein Trust Layer takes these measures to secure your prompt and it’s delivered to the LLM, your customers’ data benefits from another distinguishing privacy feature—Zero Data Retention. In all their partnerships with LLM vendors, Salesforce requires a zero retention policy to ensure that any prompt data or response generated by the LLM is not stored or used for model training outside of Salesforce. This guarantees that no sensitive information is held by the LLM provider after the interaction, giving businesses peace of mind that their data is not at risk of being misused or retained unnecessarily.

The Response — Ensuring Safety from LLM to User

Once the LLM generates a response, the Einstein Trust Layer’s job isn’t finished. As the response travels back to the user, several additional steps filter, process, and prepare the response for the user to see.

This starts with Toxicity Detection to flag any harmful or inappropriate content in the response. Next, the response undergoes Data Demasking to repopulate relevant information that was hidden as the prompt was delivered to the LLM. Finally, once the response is presented to the user, they can accept, modify, or reject the response, and provide explicit feedback which is captured in an Audit Trail stored in Data Cloud.

1. Toxicity Detection

Before the response is presented to the user, it undergoes toxicity detection. This is a critical filtering step where Salesforce’s deep learning models review the generated response for harmful or inappropriate content. The Trust Layer scans the response for toxic language, hate speech, violence, sexual content, identifiable personal information, and profanity. It assigns a toxicity score to the response that reflects the probability of it including harmful or inappropriate content. This score is captured and stored in the audit trail along with the original prompt and response.

2. Data Demasking

Once the response has passed through toxicity detection, the Trust Layer takes the final step of data demasking. This is where the placeholders that were used to protect sensitive information earlier (such as “Customer_Name” or “Account_Number”) are replaced with the actual data from your Salesforce instance. This rehydrates the response so that the user sees complete, relevant content without having exposed sensitive data to the LLM.

3. Feedback and Audit Trail

Finally, for businesses using Data Cloud, users can provide feedback on the quality of the response which is captured in an Audit Trail and stored in Data Cloud. This audit trail logs the entire process—from the initial prompt to the LLM’s unfiltered response, toxicity scores, and the final user response. It also includes timestamps and user feedback, ensuring that admins can track and review the quality, security, and appropriateness of each AI-generated response. This feature is particularly useful for compliance and performance monitoring, offering a clear paper trail for every interaction involving the Trust Layer.

Final Thoughts

Ultimately, the Einstein Trust Layer is a core element of Salesforce’s AI-driven workflows, designed to safeguard data at every stage of the process. Through techniques like data masking, toxicity detection, and secure data retrieval, it ensures that sensitive information is protected as it moves through AI models – from the user to the LLM and back.

Looking to take the next step in integrating AI-driven solutions into your workflows? Reach out to Port & Starboard today to see how we can help you harness Salesforce’s GenAI capabilities to drive secure, efficient, and personalized customer experiences!

Gabe Soliman

Gabe is a Certified Salesforce Administrator and compulsive problem-solver that’s passionate about helping clients actualize their full potential through Salesforce. With a background in the nonprofit industry and software development, he loves digging into the how’s and why’s to improve processes, nurture clean data, and help clients better serve their customers and community. Salesforce is like oatmeal. It’s good for you, and if it’s not to your liking, it can always be better customized to what you need.

Browse By
Category

Browse By Category //

Categories

Newest Posts