Salesforce Development Lifecycle and Deployment Architect – Study Guide

Exam Overview: The Salesforce Development Lifecycle and Deployment Architect certification exam tests advanced knowledge of release management, environments, methodologies, and governance on the Salesforce platform. It consists of 60 multiple-choice questions in 105 minutes, and the passing score is around 68%. The exam covers 8 key topic areas, each weighted by importance. The highest-weighted domains are System Design (≈15%) and Deploying/Building/Releasing/Testing (≈13–14% each), so expect many questions on these. Other areas like Application Lifecycle Management (≈8%) and Risk & Methodology Tools (≈3–5%) have fewer questions but are still crucial to understand. This guide is organized by exam topic, providing clear explanations, practical exercises, and key terms for flashcard review. Use this as a roadmap to focus your study time efficiently and master the most important concepts for the exam.

Application Lifecycle Management (ALM) – 8%

Overview: ALM encompasses the processes and methodologies used to plan, build, test, and release Salesforce applications. A core aspect is choosing the right development methodology (such as Agile vs. Waterfall) to manage project risk and meet customer requirements. Agile methodologies (e.g. Scrum) emphasize short, iterative development cycles with continuous feedback, which suits fast-changing or complex Salesforce projects. In contrast, Waterfall is a linear approach with defined phases (requirements → design → build → test → deploy) and works best when requirements are well-understood up front (often in regulated, low-change environments). In practice, many Salesforce implementations blend these approaches (“Water-Scrum-Fall”), using upfront planning but iterative builds. Be prepared to recommend a development approach based on a scenario’s risk profile: Agile for flexibility and rapid innovation, or Waterfall for strict compliance and predictability.

A successful ALM also requires a release management strategy that coordinates how and when changes move to production. High-performing teams plan regular release windows (for example, biweekly sprints or monthly releases) and ensure robust communication across development, testing, and operations. Key questions include: How will multiple teams coordinate their deployments? How will end-users be trained on updates? An effective strategy might involve release checkpoints, a synchronized calendar, and clear ownership of deployment tasks. Salesforce’s recommended practice is to adopt smaller, frequent releases (continuous delivery) rather than rare “big bang” deployments, as smaller releases reduce risk and allow faster feedback. Equally important is aligning development teams and governance: establish guidelines so that everyone follows the same process. For example, teams should have visibility into each other’s work to avoid redundant or conflicting changes. Strong communication and an agreed-upon governance framework (covered in a later section) ensure that ALM processes run smoothly across the organization.

Strategic Tip: Even though ALM carries a lower weight, it sets the foundation for all other topics. Focus on understanding the pros and cons of Agile vs. Waterfall in a Salesforce context and be ready to identify which fits a given scenario. Also, study how effective release management ties in with sandbox usage, version control, and team coordination – ALM concepts often appear in scenario-based questions that link to other domains.

Practical Exercises (Trailhead & Hands-on):

  • Trailhead – Explore Project Management Methodologies: Complete this module (and related project-management modules) on Trailhead to solidify the differences between Waterfall and Agile in Salesforce projects.

  • Release Calendar Planning: In your Trailhead Playgrounds or dev org, simulate a release cycle. For example, create a change log and “release calendar” for a fictitious project with two-week sprints. Document the steps for code reviews, user acceptance testing (UAT), and deployment for each sprint.

  • Team Collaboration Simulation: If possible, use a tool like Salesforce DevOps Center or a source control repository (GitHub) with a partner. Practice making changes in parallel and merging them, to experience the importance of team alignment and communication in ALM.

Key Terms and Concepts for Memorization:

  • Agile (Scrum) – Iterative development framework with sprints and frequent feedback loops 

  • Waterfall – Sequential project methodology with defined phases and a single final delivery 

  • Release Management – Strategy for scheduling and coordinating deployments (who, when, how changes reach prod)

  • Continuous Delivery – Practice of keeping code in a deployable state; frequent releases with manual approval 

  • Continuous Deployment – Automated release to production upon passing tests (no manual gate) 

  • Change Set Development vs. Package Development – Legacy org-based development (metadata lives in org) versus source-driven package-based development (metadata in VCS)

  • DevOps Center – Salesforce tool to manage ALM with source control and CI, replacing change sets in modern workflows

  • Governance – Oversight processes and roles ensuring ALM standards are followed (see Governance section for CoE details)

Planning (Environments & Governance) – 13%

Overview: The Planning domain focuses on environment strategy, risk management, and governance before and during development. A fundamental decision is defining an org strategy: should the business use a single Salesforce org or multiple orgs? A single-org strategy centralizes all business units on one platform for consistency and easier global reporting, but can become complex and hit limits as the org grows. A multi-org strategy (multiple production orgs) offers autonomy to business units and can avoid scalability issues (e.g., splitting data or customizations by region or product line), but introduces challenges in integration and coordination. Given a customer’s landscape, evaluate criteria like differing processes, regulatory requirements, and org limitations to recommend the right approach. For example, a company with diverse processes and data residency rules might need multiple orgs, whereas one seeking a “360 view” of the customer might strive for a single org. It’s a balancing act to satisfy business needs while keeping the technology landscape manageable. Be ready to weigh pros and cons of multi-org vs. single-org in scenarios (e.g., mergers, regional divisions).

Equally important is the sandbox environment strategy for development and testing. Salesforce provides sandbox types (Developer, Developer Pro, Partial Copy, Full), each suited for specific uses. When planning, map out which sandbox each phase of the release will use: for example, developers work in Developer sandboxes, integration testing happens in a shared Partial or Full sandbox, UAT in a Full sandbox, and a staging sandbox mimics production for final testing and training. A good plan might allocate multiple Developer sandboxes for parallel project streams, a Full sandbox for performance testing and staging, and perhaps a separate hotfix sandbox reserved for emergency bug fixes. You should be able to apply a sandbox strategy to a release plan, ensuring that concurrent work streams have isolated development orgs and that there is a clear path (e.g., Dev → QA → UAT → Prod). Understand the refresh limitations of each sandbox type and how to schedule them. For example, Developer sandboxes can be refreshed daily (useful for iterative development), while Full sandboxes can only be refreshed every 29 days (so they are used sparingly, for final validation with production data). A clever sandbox strategy maximizes parallel development while minimizing integration conflicts.

Another Planning aspect is risk identification and mitigation for the customer’s environment. Environment risks include collisions between multiple teams’ changes, data or metadata inconsistencies across orgs, hitting Salesforce limits, and the impact of Salesforce’s own platform releases. A key best practice is to use source control (a VCS like Git) as a single source of truth to reduce the risk of overwriting work – branching and merging strategies help multiple developers work simultaneously without stepping on each other. Also, maintaining data quality and representative test data in sandboxes mitigates the risk of bugs only showing up in production. For instance, if your sandboxes have poor or stale test data, you might miss issues; using Partial/Full sandboxes or seeding test data helps catch problems early. Another common risk is deploying large changes all at once – this can be mitigated by feature flagging or phased rollouts. The exam may give a scenario (e.g. tight timeline, multiple teams) and ask how to minimize risk: you might answer with strategies like frequent integration testing, code reviews, automated regression tests, and backup/rollback plans. Always articulate an appropriate mitigation (such as “test your changes in a preview sandbox and run full regression tests before a major Salesforce seasonal release” to mitigate new-release risk).

Governance framework is the final pillar in Planning. Governance ensures all these moving parts (multiple orgs, many sandboxes, many teams) stay aligned with business objectives and compliance. Often implemented via a Center of Excellence (CoE), governance provides structured oversight. A CoE is a cross-functional team (admins, architects, dev leads, business stakeholders) chartered to enforce standards, manage the backlog of changes, and approve designs. For example, a governance framework might require any proposed change to be reviewed by an Architecture Review Board or to follow a change management process with defined steps. In the exam, given a scenario, you should recommend a governance model – perhaps establishing a steering committee for a large enterprise, or adopting a tiered governance (executive sponsor, design authority, working group) for complex multi-project environments. Key elements to mention include executive buy-in, clear roles and responsibilities within the governance team, regular communication, and documentation of standards. Governance also covers release governance – e.g., deciding on a global release calendar if multiple orgs, ensuring security/compliance reviews are done, and having escalation paths for conflicts. A strong governance process helps avoid the “Wild West” of unmanaged changes and ensures long-term org health.

Strategic Tip: Expect scenario questions that blend these topics – for instance, choosing an org and sandbox strategy for a company (test your reasoning on multi vs single org and environment planning), or recommending how to handle a Salesforce seasonal release. Use elimination: if a choice undermines governance or skips a testing environment, it’s likely incorrect. Emphasize answers that include planning for Salesforce seasonal releases (e.g. use sandbox preview, read release notes, run Apex tests during Salesforce’s pre-release window) to show risk mitigation.

Practical Exercises:

  • Org Strategy Case Study: Create a two-column list for a hypothetical enterprise: in one column, list indicators for a single-org approach (e.g. centralized processes, need for global data sharing), and in the other, indicators for multi-org (e.g. distinct business units with unique processes, risk of hitting limits). For each indicator, write a one-sentence rationale. This helps internalize how to evaluate org strategy.

  • Sandbox Mapping Exercise: Draw a diagram of a deployment pipeline for a sample project. Label each environment (Dev Sandbox, Integration Sandbox, UAT, Staging, Prod) and write the purpose of each (unit testing, integration testing, user training, etc.). This visual mapping reinforces how sandboxes map to the release plan.

  • Governance Charter Draft: Write a short “Governance Charter” for a Salesforce CoE. Include roles (e.g. Exec Sponsor, Lead Architect, Release Manager), meeting cadence, and a few example policies (e.g. “All production changes must be demoed in UAT to business owners before go-live”). Use Salesforce’s CoE guides for inspiration. This exercise makes governance concrete and memorable.

Key Terms and Concepts for Memorization:

  • Single Org vs. Multi-Org – Single org = one Salesforce instance for all teams; Multi-org = multiple prod orgs for different units. Know pros/cons: single org offers unified data but can become complex; multi-org offers autonomy and avoids org limits but adds integration overhead.

  • Sandbox Types – Developer (metadata only, 200 MB data storage, 1-day refresh interval), Developer Pro (metadata only, 1 GB data storage, 1-day refresh), Partial Copy (metadata plus a sample of production data via a sandbox template, 5-day refresh), Full (complete replica of production data and metadata, 29-day refresh).

  • Sandbox Strategy – Plan assigning sandbox environments to dev, QA, UAT, training, hotfix, etc., including parallel development streams and refresh scheduling.

  • Salesforce Release (Seasonal) – thrice-yearly Salesforce upgrades (Spring, Summer, Winter). Mitigation: use sandbox preview to test against the new release, read release notes, and plan change freeze if needed.

  • Source Control & Branching – Using Git or similar to manage metadata changes. Branching strategies (feature branches, dev branch, main branch) isolate work and reduce risk of conflicts.

  • Center of Excellence (CoE) – Governance body establishing Salesforce best practices, standards, and design oversight. Ensures people, process, and technology are aligned (often formalized as a governance framework).

  • Change Management – Formal process for evaluating and approving changes (could involve change advisory board, documented deployment steps, etc.).

  • Risk Mitigation – Actions like code review, automated testing, backup plans, and phased rollouts to minimize deployment risk. For example, use feature flags to turn off new features if issues arise.

System Design (Architecture & Deployment Design) – 15%

Overview: This domain covers the architectural design of the development lifecycle, including tools and techniques to support an agile, scalable process. One focus is on leveraging Agile tools and practices to support development. Using dedicated agile project management tools (like Jira, Trello, or Salesforce’s Agile Accelerator) can greatly enhance team collaboration and transparency. Such tools allow teams to maintain a prioritized backlog of user stories, plan sprints, and track progress visibly. The advantage is better alignment and adaptability: short sprints let teams deliver value quickly and adjust to changing requirements, which aligns with DevOps principles of rapid, high-quality releases. In practice, an agile tool enforces discipline – every change is tied to a story, and the status is known to all – reducing chaos in large Salesforce projects. The exam may not quiz specific software, but you should recognize that “agile tools” improve communication, encourage continuous improvement, and help respond swiftly to new demands (versus managing projects via spreadsheets or email, which is error-prone).

Next, org strategy considerations appear again here, but from a technical design angle. Given a customer’s requirements, you must evaluate business and technical factors to support the defined org strategy. This means once an org model (single vs multi) is chosen, design the dev processes accordingly. For instance, in a multi-org environment, you may need separate development pipelines for each org and a way to propagate shared components across orgs. A best practice for multi-org is modularizing common functionality into packages that can be deployed to all orgs, to avoid divergence. Recognize challenges like coordinating releases across multiple orgs – if not handled, orgs can get out of sync quickly, with different release windows and inconsistent features. An example scenario: a company has a central CRM org and a separate org for APAC region – how do you manage deployments? You might suggest using a version control system with branches per org or a managed package to roll out common updates. In a single-org scenario, focus on designing an efficient environment strategy within that org (multiple sandboxes, etc., which overlaps with earlier topics). The key is to connect requirements to org architecture: e.g., high complexity or regulatory segregation => multi-org; need for unified customer view => single-org.

System Design also involves defining an environment strategy (sandbox strategy) in technical detail. This was discussed in Planning, but here think of designing the flow: how code moves from dev to staging. You may be asked, for example, how to set up environments for multiple concurrent projects. A good design might dedicate separate dev sandboxes per project stream, a common integration sandbox where all changes are merged and tested, and a staging sandbox that’s a Full copy for final regression and user testing. Also consider special environments like scratch orgs (ephemeral orgs from Salesforce DX) for development. Scratch orgs enable source-driven development and can be created and destroyed quickly, fitting well into CI pipelines. If the exam scenario mentions Salesforce DX or package-based development, recommending scratch orgs for each feature branch is a likely answer. Know that scratch orgs require a Dev Hub and are often used with unlocked packages.

Another objective is to compare and recommend deployment tools and components for a successful deployment strategy. Salesforce offers multiple deployment approaches: Change Sets, Metadata API (ANT/SFDX CLI), and packages (managed/unmanaged/unlocked). You should be comfortable contrasting these:

  • Change Sets: Easiest, point-and-click in Salesforce UI, but only work between connected orgs (e.g. sandbox to production) and can be tedious for large volumes of components. Great for small admin-driven updates, but not scalable for big projects or multi-org (no support for deploying to unrelated orgs). No support for deleting components or automation.

  • Metadata API (e.g. ANT, SFDX): Scriptable deployments via command-line or CI tools. More efficient for large deployments (you can deploy hundreds of components defined in package.xml at once, rather than manually selecting each as in change sets). Supports destructive changes (deletions) and can deploy to any org with credentials. Requires more technical skill and version control integration, but enables CI/CD and repeatability.

  • Managed Packages: Typically used by ISVs for AppExchange apps (with a namespace, IP protection, upgrade capability). Managed packages are versioned and upgradable in subscriber orgs, but components are locked – customers cannot modify packaged components easily. These are less common for internal deployments unless the business has multiple orgs and decides to “package” its common components.

  • Unlocked Packages: Introduced for enterprise development (second-generation packaging). They allow packaging of metadata into modular, versioned units with the flexibility that admins can still tweak them in the org (unlike managed). Unlocked packages require source-driven development (using Salesforce DX and CLI) and are great for organizing a large org’s metadata into logical components (e.g., Sales app vs. Service app in separate packages). They support dependencies and allow continuous integration builds, making them ideal for internal DevOps.

  • Unmanaged Packages: Simple containers for distributing metadata (e.g., one-time drop of code); not upgradable, essentially just a snapshot. Useful for temporary or sample deployments but not for long-term versioning.

When recommending a deployment strategy, consider the scenario’s needs: If the customer has a mature DevOps setup, using source control + CI with Metadata API or unlocked packages is best for automation and rollback. If the customer is a small team with minimal DevOps, Change Sets might suffice for simplicity. The exam may give you a deployment scenario – e.g., “200 custom fields to deploy across multiple orgs” – and the correct recommendation would be to use a scripted metadata deployment (ANT/SFDX) instead of a change set, due to scale. Or if asked how to deploy consistently to 5 orgs, an unmanaged package or unlocked package could be an answer (since change sets cannot deploy to unrelated orgs).

Strategic Tip: System Design is the heaviest-weighted section, so expect in-depth scenario questions. Be comfortable explaining why a particular tool or approach fits a scenario. A common pitfall is to rely on change sets for everything – show awareness of better tools for large or multi-org deployments. Also, mention modern best practices (Salesforce DX, scratch orgs, CI) when appropriate, as the exam values up-to-date knowledge. Think like an architect: the goal is repeatable, reliable, and scalable deployments.

Practical Exercises:

  • Tool Comparison Table: Make a quick reference table listing Change Sets, ANT/Metadata API, Unlocked Packages, and Managed Packages. For each, note key features, use cases, and limitations (e.g., “Change Set – easy UI, no external tools; cons: only related orgs, no deletions”). This solidifies your understanding of when to use each.

  • Build an Unlocked Package: If you have Dev Hub access (you can enable it in a Trailhead Playground), try creating an unlocked package containing a few custom fields or a custom object. Follow a Trailhead module like Unlocked Packages for Customers. This hands-on experience will help you remember package concepts (namespaces, versions, flexibility).

  • CI/CD Simulation: Use a free DevOps tool or even a simple script to simulate continuous integration. For example, use Salesforce CLI (sfdx force:source:deploy) to deploy a component from one sandbox to another and run tests. Observe how you can automate deployments. If Trailhead CI badges (e.g., Continuous Integration using GitHub Actions) are available, do one to reinforce how source control and automated tests fit together.

Key Terms and Concepts for Memorization:

  • Agile Project Tools – e.g. Jira, Azure Boards. They support sprint planning, story tracking, and team collaboration; their use leads to higher transparency and faster adaptation.

  • Scratch Org – Temporary org created via Salesforce DX, used for development and testing of specific features. Emphasize that scratch orgs enable source-driven workflows and parallel dev.

  • Change Set – Salesforce native deployment container. Limitations: Only between connected orgs, manual component selection, no version control, cannot move all metadata types or do deletions.

  • Metadata API (ANT/SFDX) – API for retrieving and deploying metadata in XML form. Used by ANT Migration Tool, Salesforce CLI. Key points: scriptable, supports CI, handles large deployments and deletions (via destructive changes).

  • Managed Package – 1st-gen package with namespace, primarily for ISV distribution. Components are locked (no edit in subscriber org), upgradable with version numbers. Often requires Security Review for AppExchange.

  • Unlocked Package – 2nd-gen package for enterprise. Upgradable and supports versioning, but components are not locked (admins can modify in org). Requires source control and CLI. Great for internal modular development.

  • Org-Based vs. Package-Based Development – Org-based (a.k.a. change-set development) means the source of truth is the org’s metadata; package-based means source of truth is version control and you deploy via packages. The latter is more modern and enables true DevOps.

  • Branching Strategy – e.g. Git flow, feature branching, etc. In context, know that branching allows multiple streams of work. Example terms: feature branch, develop branch, master/main, release branch. Ensure you link branching approach to the need (frequent integration to avoid big bang merges).

  • Deployment Artifacts – Reusable build outputs like package .zip files or versioned packages. In an ideal strategy, every release is a versioned artifact that can be rolled back if necessary.

  • “Deployment Fish” – Community nickname for the fish-shaped progress chart shown on the Deployment Status page while a deployment runs; a light-hearted reminder of how slow and unpredictable large change set deployments can feel.

Building (Development & Testing Readiness) – 14%

Overview: The Building domain zooms into day-to-day development practices: version control, testing practices, and ensuring code quality. A critical concept is source control management and the use of a proper branching/versioning strategy during development. Modern Salesforce teams treat the version control repository as the “source of truth” for all metadata, moving away from making uncontrolled changes directly in orgs. This allows multiple developers to work concurrently and integrate their code changes continuously. You should understand branching models like feature branching (each work item in its own branch), development/integration branch (for merging features and testing), and main/master branch (production-ready code). For example, a feature branch -> pull request -> merge to integration -> test -> merge to main workflow is common. The exam may ask how branching and merging can be used to support parallel development or hotfixes. In a scenario, you might recommend using branches (and maybe forking strategies) to isolate a hotfix from ongoing development, then merge the hotfix back into the main line once done. The key is to articulate that source control enables trackable, auditable changes and helps avoid the “it works in my org” problem – if something is in the repo and properly merged with others’ work, you reduce surprises. In fact, an org-based dev approach (no VCS) is likened to copying files between random machines and hoping nothing breaks. A source-driven approach yields reliable, consistent deployment artifacts and is considered best practice.

In the development phase, ensuring code quality is paramount. The exam expects knowledge of methods to deliver quality code: coding standards (naming conventions, avoiding anti-patterns), code reviews (peer reviews or pull request reviews to catch issues), and static code analysis tools. Salesforce developers often use tools like PMD or SonarQube to automatically scan Apex/code for bugs, security issues, or style violations. For example, a static analysis might warn about SOQL inside loops or unused variables. Pull requests combined with automated checks enforce these standards before code is merged. Also remember that Salesforce has built-in guardrails like requiring 75% test coverage for Apex deployments (discussed below), which indirectly forces some level of code testing quality. If a question asks “how to ensure quality in the delivery of code,” an ideal answer touches on code review practices, use of static analysis, enforcing design patterns, and proper testing (unit tests, integration tests). For instance, implementing a rule that every Git pull request must be reviewed by a Tech Lead and pass PMD checks would be a strong answer.
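
To make this concrete, here is a minimal, hypothetical Apex sketch (class and method names are illustrative, not from any exam material) of the kind of issue static analysis and peer review are meant to catch – a SOQL query inside a loop – alongside a bulkified rewrite:

```apex
public class ContactCounter {

    // Anti-pattern: one SOQL query per iteration; this will hit governor limits
    // on large batches and is exactly the sort of issue a PMD-style rule flags.
    public static void countContactsBad(List<Account> accounts) {
        for (Account acc : accounts) {
            Integer n = [SELECT COUNT() FROM Contact WHERE AccountId = :acc.Id];
            System.debug(acc.Name + ' has ' + n + ' contacts');
        }
    }

    // Bulkified rewrite: a single aggregate query for the whole batch.
    public static void countContactsGood(List<Account> accounts) {
        Map<Id, Integer> countsByAccount = new Map<Id, Integer>();
        for (AggregateResult ar : [
                SELECT AccountId accId, COUNT(Id) total
                FROM Contact
                WHERE AccountId IN :accounts
                GROUP BY AccountId]) {
            countsByAccount.put((Id) ar.get('accId'), (Integer) ar.get('total'));
        }
        for (Account acc : accounts) {
            Integer n = countsByAccount.containsKey(acc.Id) ? countsByAccount.get(acc.Id) : 0;
            System.debug(acc.Name + ' has ' + n + ' contacts');
        }
    }
}
```

A static-analysis rule in the “avoid operations inside loops” family would flag the first method automatically, and a pull request review would ask for the second version before merging.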

The testing approach and test data strategy also fall under Building. Good developers create robust Apex unit tests to validate their code and to meet deployment requirements. Remember Salesforce’s rule: at least 75% of Apex code must be covered by tests and all tests must pass to deploy to production. But beyond just coverage, the exam is interested in your understanding of test methodology: you should write tests for positive cases (expected behavior), negative cases (handling bad data or errors), permission-based cases (users with different profiles), and large data volume scenarios. This ensures code works under all conditions. A unified test data strategy means using consistent, representative data sets across different test levels (unit, integration, UAT) without exposing sensitive info. For example, use a sandbox seeding or data masking tool to create realistic test data in UAT that mirrors production (so that tests in UAT truly reflect prod behavior). In Apex unit tests, best practices include not relying on existing org data (create your own test records), using Test.startTest()/Test.stopTest() properly, and testing bulk operations. A likely exam point: Given a scenario of a testing requirement, recommend how to design test classes or test data. You might answer: “Use a test data factory to create required Accounts/Contacts so each test runs with known data, ensuring independence from org data and covering relevant use cases.” Also, if a scenario involves, say, a new Salesforce release coming, you would recommend a full regression test in a preview sandbox to catch any issues (as part of testing methodology over the lifecycle).
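
As a concrete (and deliberately simple) illustration of that methodology, here is a hedged sketch of an Apex class and a test class covering a positive case, a negative case, and a bulk case. All names are hypothetical, and in a real org each class would live in its own file:

```apex
// Hypothetical class under test: applies a 10% discount to amounts of 1000
// or more and rejects negative amounts.
public class DiscountCalculator {
    public class DiscountException extends Exception {}

    public static Decimal applyDiscount(Decimal amount) {
        if (amount < 0) {
            throw new DiscountException('Amount cannot be negative');
        }
        return amount >= 1000 ? amount * 0.9 : amount;
    }
}

// Companion test class.
@IsTest
private class DiscountCalculatorTest {

    // Positive case: valid input produces the expected result.
    @IsTest
    static void discountAppliedAboveThreshold() {
        System.assert(DiscountCalculator.applyDiscount(1000) == 900,
            'Amounts of 1000 or more should get a 10% discount');
        System.assert(DiscountCalculator.applyDiscount(500) == 500,
            'Amounts under 1000 should be unchanged');
    }

    // Negative case: invalid input should fail gracefully with a clear error.
    @IsTest
    static void negativeAmountIsRejected() {
        try {
            DiscountCalculator.applyDiscount(-1);
            System.assert(false, 'Expected an exception for a negative amount');
        } catch (DiscountCalculator.DiscountException e) {
            System.assert(e.getMessage().contains('negative'));
        }
    }

    // Bulk case: create the test data the code needs (never rely on org data)
    // and exercise it at volume inside Test.startTest()/Test.stopTest(),
    // which gives the code under test a fresh set of governor limits.
    @IsTest
    static void bulkRecordsAreHandled() {
        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < 200; i++) {
            opps.add(new Opportunity(
                Name = 'Bulk Opp ' + i,
                StageName = 'Prospecting',
                CloseDate = Date.today().addDays(30),
                Amount = 1000 + i));
        }
        Test.startTest();
        insert opps;   // exercises any triggers/automation on Opportunity in bulk
        Test.stopTest();
        System.assertEquals(200, [SELECT COUNT() FROM Opportunity WHERE Name LIKE 'Bulk Opp %']);
    }
}
```

Note the assertions: every test verifies an outcome rather than merely executing lines, which is what separates meaningful coverage from padding.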

Finally, development models tie into Building: Org-based vs. Package-based development. Org-based development (also called change-set development) was described earlier – you build directly in a sandbox and treat that org as the source of truth. Package-based development (enabled by unlocked packages and scratch orgs) means you build in modular packages with everything tracked in Git. The exam could ask about the appropriate development environment: for example, “Given a customer scenario with an experienced team and need for CI, should they use scratch orgs or developer sandboxes?” The answer would lean towards scratch orgs and unlocked packages for an advanced, source-driven team. In contrast, a less mature team might stick to developer sandboxes and an org-based approach initially. Also, consider developer sandboxes vs. scratch orgs – scratch orgs are great for fully automated workflows and ephemeral testing, but Developer sandboxes persist longer and contain more org configuration (useful for config work by admins). Showing you know the difference will earn points.

Strategic Tip: When answering questions in this domain, use technical keywords: mention things like “Git,” “pull request,” “code coverage,” “system assert,” “data masking,” etc. This signals familiarity with real-world dev practices. Many options may sound plausible; choose the one aligned with Salesforce best practices (e.g., never propose editing code directly in production, always prefer using a VCS and sandbox). Also remember, this domain overlaps with Testing (next section) – ensure you don’t confuse where to talk about unit vs. UAT. In Building, focus on the developer’s perspective: writing good code and tests, using the right tools to manage code.

Practical Exercises:

  • Git Practice: If you haven’t already, set up a simple Git repository for a Salesforce project (you can use Salesforce CLI to retrieve some metadata from a dev org into source format). Practice creating a feature branch, making a small change (like editing a validation rule in the metadata files), and merging it back. This hands-on will help you remember branching/merging mechanics.

  • Code Review Simulation: Find an example of a poorly written Apex class (you can intentionally write one with common mistakes, like SOQL inside a loop). Perform a “code review” by listing out issues and suggesting improvements (e.g., bulkify the trigger, add null checks, etc.). This mirrors how you would ensure quality code delivery.

  • Write Apex Tests: In a Developer Sandbox or Trailhead Playground, write a simple Apex class (for example, a class that converts temperatures or calculates discounts) and then write a test class for it. Include at least one positive test, one negative test (e.g., expect an exception), and one bulk test (calling the method on 200 records). Aim for >90% coverage. Running these tests will reinforce concepts like System.assertEquals and test data setup. Compare your code against Salesforce’s best practices (Trailhead module Apex Testing can guide you).

Key Terms and Concepts for Memorization:

  • Version Control (Git) – System to track changes in code. Enables collaboration and rollback. Key branch types: feature branch, develop/integration branch, master (main) branch. Understand merge vs. rebase at a basic level (not deeply tested, but concept of merging code is).

  • Branching Strategy – e.g., Git Flow, which uses feature branches, a develop branch for integration, and release/hotfix branches. Know that branching strategy should match team size and release cadence (e.g., small team might use a simple trunk-based strategy vs. large team using Git Flow).

  • Pull Request – A mechanism in Git for a developer to notify others about changes they want to merge. This is where code reviews happen. Often integrated with CI (running tests on PR).

  • Static Code Analysis – Tools that analyze code for potential errors or style violations without executing it. In Salesforce context: PMD, CodeScan, Clayton. Example rule: Avoid DML inside loops.

  • 75% Code Coverage – Deployment rule: at least 75% of Apex lines covered by tests, all tests passing. Each trigger must have some coverage. Aim higher (90%+) in practice, but 75% is the minimum.

  • Test Data Factory – A class or pattern to create test records consistently. Ensures each test method has the data it needs and reduces duplicate code in tests (see the sketch after this list).

  • Positive vs. Negative Tests – Positive test = verifies code works with expected inputs; Negative test = ensures code handles errors (e.g., pass invalid data or user without permission and assert it fails gracefully).

  • Bulk Testing – Testing code with large volumes (100+ records) to ensure it’s bulkified (especially triggers and batch classes).

  • Integration Testing – Testing how different modules or systems work together (e.g., test a whole process across objects, or an external integration’s end-to-end behavior). Usually done in an org with more data.

  • Data Masking – Replacing sensitive data (like emails, names) with fake but realistic data in sandboxes, so that testing is done on safe data. Relevant to a “unified test data strategy” – often achieved with tools or Salesforce Data Mask.

  • Org-Based vs. Scratch Org Development – Org-based uses long-lived sandboxes where config and code are manually built, then retrieved; Scratch org (source-driven) development uses ephemeral orgs and the metadata is pulled from VCS. Recognize that scratch orgs + unlocked packages yield a more agile, modular approach.
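
To make the “Test Data Factory” term above concrete, here is a minimal sketch of the pattern; the class and method names are illustrative:

```apex
// Minimal test data factory: test classes call these methods instead of
// building records inline or querying whatever data happens to be in the org.
@IsTest
public class TestDataFactory {

    public static List<Account> createAccounts(Integer count, Boolean doInsert) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < count; i++) {
            accounts.add(new Account(Name = 'Test Account ' + i));
        }
        if (doInsert) {
            insert accounts;
        }
        return accounts;
    }

    public static List<Contact> createContacts(List<Account> accounts, Integer perAccount, Boolean doInsert) {
        List<Contact> contacts = new List<Contact>();
        for (Account acc : accounts) {
            for (Integer i = 0; i < perAccount; i++) {
                contacts.add(new Contact(
                    LastName = 'Test Contact ' + i,
                    AccountId = acc.Id));
            }
        }
        if (doInsert) {
            insert contacts;
        }
        return contacts;
    }
}
```

A test method (or an @TestSetup method) would call TestDataFactory.createAccounts(5, true) and build contacts on top of the result, so every test runs against known, representative data regardless of what exists in the org.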

Deploying (Deployment Execution and API Considerations) – 14%

Overview: This domain focuses on the technical deployment process, including Salesforce deployment APIs, pre- and post-deployment steps, and handling of configuration data. A key objective is understanding the Metadata API’s capabilities and limitations (and by extension, the Tooling API) for deployments. The Metadata API is Salesforce’s primary mechanism for moving metadata (custom objects, fields, code, etc.) between orgs. It’s robust for migrating full components, but it has some limitations: not all settings are metadata (for example, some org preferences or standard picklist values might not deploy easily), and deployments via Metadata API must run all required tests in the target org (for production deployments). It is asynchronous and can deploy many components at once, with the result being either success or a list of errors if any component fails. The Tooling API, in contrast, is designed for finer-grained operations and for building developer tools (like IDEs). It can retrieve or manipulate individual components (e.g., run a single Apex test, get the symbol table of a class) and is used under the hood by the Developer Console and IDEs. However, the Tooling API is not typically used to deploy metadata to production – it’s more for editing or debugging during development. For example, you cannot deploy a full metadata package to prod purely with Tooling API calls; you’d use Metadata API for that. The exam may ask you to describe when to use one vs. the other: you could say “Use Metadata API for migrating configurations or doing CI deployments (supported by tools like ANT or SFDX) because it’s tailored to moving whole components. Use Tooling API for specialized tasks like retrieving code coverage, running tests, or building a custom tool that needs Salesforce code intelligence.” In short: Metadata API = deployments and migrations; Tooling API = development assistance (IDE features, live debugging). Recognize also that Tooling has some unique objects (like ApexExecutionOverlayAction for debug) and that not every metadata type is exposed in Tooling. If asked about constraints, mention that some metadata (especially newer features) might initially only be available in Tooling API or not at all via metadata until later – but for the exam, focus on the general roles.

Another aspect is handling pre-deployment and post-deployment steps, especially for items not supported by the APIs. Pre-deployment steps could include manual tasks like “freeze” activities: e.g., temporarily disabling scheduled jobs or workflow rules that might interfere with deployment, or exporting reference data. Post-deployment steps are often needed because certain components don’t activate or deploy in an “active” state. Common examples:

  • After deploying Flows or Process Builder processes, you may need to activate them (the Metadata API can deploy flows as inactive for safety). Similarly, if you deploy a new Assignment Rule or Escalation Rule, you might have to set it as active in the org UI post-deployment.

  • If Profiles or Permission Sets are deployed, you might need a post-step to assign them to users (user assignments are not metadata deployable).

  • Compile Apex: In older times, you might run an Apex compilation job (though Salesforce does this automatically on deploy now). However, things like recompiling cross-object formulas might be needed if certain dependencies changed.

  • Deploying Reference Data: This refers to data that drives app logic but isn’t user transactional data – for example, custom metadata types (which are deployable as metadata), or custom settings data, or CPQ product records. If some “configuration data” isn’t captured in metadata, you need a strategy to migrate it. One approach is using a data loading script or a tool like Data Loader or Excel connector as part of the release process. For instance, if you have a Custom Setting that the app logic depends on, you might export those records from UAT and import into production after deployment.

  • Items explicitly not in Metadata API: historically things like Knowledge base articles, Sales Cloud Einstein configurations, or Analytics assets might require manual setup or separate APIs. While the exam won’t dive into each product’s nuance, you should answer generally: use a combination of automation and manual steps to handle components not covered by metadata deployment.

Expect a scenario like: “You are deploying a large update, which includes a new flow, changes to picklist values, and some data that new functionality needs. What steps would you take pre and post deployment?” A good answer: Before deployment, communicate a freeze to end-users, disable background jobs if necessary, and take a backup of key data. Deploy via Metadata API. After deployment, activate the new flow, verify picklist values in the target org (as picklist value deployments can be tricky), and load any required reference data (perhaps via a CSV import for records needed by the new functionality). Finally, run a post-deployment regression test or smoke test. Use of a continuous integration tool can automate many of these steps except the ones that require manual intervention (like activating a flow, which could also be automated via a metadata deploy of an active version or a Tooling API call). Tools like Gearset or Copado often allow you to specify post-deploy tasks; as an architect, you design the process for these tasks.

A special mention: deploying and managing reference data (technical configuration data) can also be approached with Salesforce features like Custom Metadata Types. Unlike List Custom Settings data, Custom Metadata records are deployable as part of metadata (and they don’t count against data limits) – so an architect might recommend using Custom Metadata Types to store configurable data whenever possible, to ease deployments (since they can be included in a deployment package and even versioned in unlocked packages). If asked how to manage reference data consistently, an answer might include: “Use Custom Metadata Types for any configuration records so they can be migrated with Metadata API. For other data, consider an automated data load in the pipeline or use a tool that supports data deployment along with metadata.”
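
For illustration, here is a hedged Apex sketch of reading configuration from a hypothetical Custom Metadata Type (Discount_Rule__mdt with a Threshold__c number field – both names are assumptions). Because the records are themselves metadata, they travel with the deployment instead of requiring a post-deployment data load:

```apex
// Reads configuration from a hypothetical Custom Metadata Type.
// The Discount_Rule__mdt records deploy alongside the code,
// so no separate data load is needed in the target org.
public class DiscountConfig {
    public static Decimal getThreshold(String ruleName) {
        Discount_Rule__mdt rule = Discount_Rule__mdt.getInstance(ruleName);
        // Fall back to a safe default if the record is missing in the target org.
        return (rule != null && rule.Threshold__c != null) ? rule.Threshold__c : 1000;
    }
}
```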

Strategic Tip: Be ready to name specific examples of pre/post steps – this shows you have practical knowledge. For instance, explicitly mentioning “activate newly deployed flows and assign new permission sets to appropriate users as a post-deployment step” adds credibility. If the question is general, structure your answer as Plan – Deploy – Validate: Plan (pre-step, communications), Deploy (via appropriate API/tool), Validate (post-step testing, activations, data loads, and contingency if something fails). Also, emphasize automation where possible: e.g., “Use a CI tool to run a validated deployment (check only) in a staging org to catch errors before production”.

Practical Exercises:

  • Metadata vs. Tooling Quiz: Write down a list of actions (e.g., “Deploy a new custom object”, “Get code coverage for a class”, “Create a Lightning Web Component”) and self-quiz whether each uses Metadata API, Tooling API, or both. For example: deploying a custom object = Metadata API; retrieving Apex class symbol table = Tooling API. Check your reasoning against Salesforce documentation or developer blogs.

  • Deploy in a Sandbox: Take a component you built (like from the earlier exercises) and deploy it to another sandbox using the Salesforce CLI (sfdx force:source:convert to metadata, then sfdx force:mdapi:deploy). Practice including a Custom Metadata Type record in your deployment (e.g., create a Custom Metadata Type and a record, retrieve and deploy it) to see how reference data can be deployed as metadata.

  • Post-Deployment Checklist: Create a generic checklist of post-deployment tasks for a Salesforce release. For example: “1. Run all tests in Production to verify no failures. 2. Activate deployed Flows/Processes. 3. Ensure new Communities or features are published/enabled. 4. Load XYZ data via Data Loader. 5. Re-enable scheduled jobs.” This exercise will help you recall common tasks quickly on the exam.

Key Terms and Concepts for Memorization:

  • Metadata API – SOAP/REST API for deployments. Key facts: moves metadata in bulk (as a .zip package; change sets use it behind the scenes); used by ANT, SFDX; requires running tests for prod deploy; supports retrieve, deploy, and destructiveChanges.

  • Tooling API – API for Salesforce dev tooling. Used for: working with smaller pieces (ApexClass, ApexTrigger objects), getting debug logs, running tests synchronously, retrieving Org schema info. Not typically used for full org deployments.

  • Deploy Strategies: Validated Deploy – deploying with test run but not committing (to preview failures); Quick Deploy – Salesforce feature to commit a validated deployment within 10 days without re-running all tests. These might not be directly in the exam, but awareness can help eliminate wrong answers.

  • Pre-Deployment Tasks – Examples: notify users of downtime, freeze changes in source org, disable scheduled jobs or integrations, take backups of data/metadata (e.g., retrieve a metadata backup or export data). Also, if using source control, merge to main and tag the release before deploying.

  • Post-Deployment Tasks – Examples: activate processes/flows, publish Community if applicable, adjust any settings that didn’t migrate (e.g., if Multi-Factor Auth got auto-enabled, etc.), load reference data, run smoke tests, get user acceptance sign-off.

  • Components Not in Metadata API – Know a few examples: User records (and assignments), data records, CRM content, Chatter data, Entitlement templates, etc., often require manual or data deployment steps.

  • Reference Data – Data that configures app logic (e.g., a list of values that drive behavior). Approaches: use Custom Metadata Types (deployable) or include .csv data load as part of release.

  • Custom Settings vs. Custom Metadata – List Custom Settings data is not migrated via metadata (needs data load), whereas Custom Metadata records are treated as metadata and deploy with changesets or packages.

  • Activation – The concept that some components (Flows, Reports, Apps in a profile) need activation or user assignment post deployment. E.g., after deploying a Lightning Page, you might need to activate it for an app and profile combination.

  • Rollback Plan – While Salesforce deployments are not easily “rolled back” automatically, an architect should always consider what to do if deployment fails. Usually this means a back-out strategy (manual or quick fix deployments) or toggling feature visibility off. Knowing that you can back out an installed package by uninstalling (with data loss considerations) or you may have to redeploy the previous metadata to rollback.

Testing (Quality Assurance and Test Planning) – 13%

Overview: The Testing domain ensures you can recommend appropriate testing methodologies and coverage approaches across the lifecycle. Testing methodology refers to the overall approach a project takes to verify functionality – this usually includes unit testing, integration testing, user acceptance testing (UAT), and performance testing. Given a scenario, you should identify the right mix of tests. For example, if a customer is in a highly regulated industry, you might recommend a very thorough testing methodology: unit tests for all code (with > 75% coverage), integration tests in a full sandbox, UAT with business users signing off, and perhaps automated regression tests to catch any bug on existing features. On the other hand, a smaller, Agile-focused team might rely heavily on automated unit tests and smaller iterative UAT cycles.

A likely exam scenario: “Customer X has frequent minor releases. Describe an appropriate testing methodology.” You could answer: Use agile testing practices – write comprehensive Apex unit tests for each change (including positive/negative cases), perform continuous integration tests with each merge (so tests run automatically ensuring nothing breaks), conduct integration testing in a dedicated UAT sandbox every two weeks for business stakeholders to validate, and do exploratory testing on new UX changes. If the scenario involves multiple teams or systems, stress the need for integration testing between Salesforce and external systems (e.g., if Salesforce connects to an ERP, test the end-to-end data flow).

Test execution methodology includes how tests are run and what coverage is needed. On Salesforce, Apex tests must be executed (and pass) as part of any production deployment that contains Apex. The exam might ask about code coverage requirements: as noted, Salesforce requires at least 75% of Apex code to be covered by tests for deployment. Additionally, each trigger must have some coverage (meaning you can’t have a trigger with 0% coverage even if overall is 75%+). Be ready to state these as facts. But beyond the number, think of test execution strategy: for instance, when doing a major deployment, should you run all tests or a subset? In production deployments, Salesforce runs your org’s local tests by default (tests from managed packages are excluded unless you explicitly choose “Run All Tests in Org”), so a strategy might involve staying with the “Run Local Tests” level for speed, or running specified tests for a targeted hotfix. In a continuous integration pipeline, a good practice is to run all tests on each commit (or at least all tests relevant to changed components) to maintain high quality. The phrase “unified test coverage” could imply making sure that between unit tests and integration tests, all critical requirements are tested (not leaving any functionality unverified).

The exam blueprint also explicitly mentions a unified test data strategy utilizing representative data in a secure manner. This means throughout dev, test, UAT, you should use data that mirrors real production scenarios but without violating privacy or security. In practice: Developers should create test data in Apex tests that reflect real data shapes (if an Account normally has 100 Contacts and related Orders, your test should mimic that scenario rather than trivial data). For UAT, use a Full sandbox with a copy of prod data or a Partial sandbox with a good template, so that testers see realistic records (ensuring, for example, that picklist dependencies or validation rules behave as in prod). Secure manner refers to masking or anonymizing personal data – e.g., use Salesforce Data Mask or manually scramble sensitive fields (like names, emails) in a Full sandbox, so testers aren’t seeing real customer PII. If a scenario is about a healthcare company, an answer should mention using scrubbed data in sandboxes to stay HIPAA compliant while testing.

Automated testing beyond Apex: remember that Apex unit tests cover backend logic, but you may also have UI tests (Selenium or Provar or Salesforce’s own UI test frameworks). An architect might recommend an automation suite for UI regression on critical paths, especially if the customer has many custom Lightning components or a complex Salesforce CPQ setup, etc. This goes into DevOps territory a bit, but since testing is key to DevOps, it’s worth mentioning if relevant: “Implement automated testing for key user flows using a tool like Provar or Selenium, so every release can be validated end-to-end without solely relying on manual testers.”

Consider also performance testing and load testing for Salesforce, if applicable. Full sandboxes can be used for performance/load testing (they contain full data). If the scenario is about a high volume system or perhaps deploying something like Communities or a new Service Cloud implementation expected to have thousands of users, you might mention running performance tests in a Full sandbox (Salesforce even allows requesting performance testing windows).

User Acceptance Testing (UAT) is a major type: ensure you know that UAT is where business users validate the solution against requirements in a sandbox environment that’s close to prod. A good methodology always includes a UAT phase before production deployment, where end users or key stakeholders sign off. If a question asks for a test execution methodology given a customer testing strategy, include UAT as a step unless the scenario explicitly is more dev-focused.

Test coverage in a broader sense might also refer to functional test coverage: making sure all user stories or requirements have associated test cases. In agile, one might say the “definition of done” includes tests written and passing for each story. So, a unified approach could be mapping requirements to tests (traceability).

Strategic Tip: When recommending a testing approach, always tie it back to risk. High risk or complex changes demand more rigorous testing (full regression, UAT with more users, maybe a pilot in production). Lower risk changes (like text label updates) might just need unit tests and a quick smoke test. So adjust your recommendations to the scenario’s stakes. Also, highlight testing over the Salesforce releases: e.g., “Before each Salesforce seasonal release, run the entire automated test suite in a sandbox on the preview instance to ensure nothing breaks” – this demonstrates proactive risk mitigation and is a best practice for any Salesforce org.

Practical Exercises:

  • Test Plan Creation: Draft a simple test plan for a fictional Salesforce project (e.g., implementing a custom Case management system). Outline phases: Unit Testing (who does it, what tools), Integration Testing (which teams and what is tested – perhaps integration with an email service), UAT (which business users, what scenarios), and Deployment Validation (smoke test in production). Creating this document helps ingrain the overall flow of testing phases.

  • Data Masking Trial: If you have a developer org with data, try out Salesforce’s Data Mask (if available in a sandbox) or simply practice exporting some data and anonymizing it (e.g., replace names with “Test User”, etc.) to simulate how you’d prepare test data for a privacy-sensitive project.

  • Apex Test Challenge on Trailhead: Complete a Trailhead module like Optimize Apex Testing or any Apex testing challenge. For example, there’s often a challenge to write Apex tests achieving certain coverage. This will reinforce writing good test methods (and you can reuse those techniques in memory for exam questions about testing best practices).

Key Terms and Concepts for Memorization:

  • Unit Testing – Testing individual units of code (Apex classes, triggers) in isolation. In Salesforce, done with Apex test methods. Should be thorough: test various inputs, utilize System.assert to verify outcomes.

  • Integration Testing – Testing how different components or systems work together. E.g., testing an end-to-end process like Lead to Opportunity conversion including an external credit check system. Often done in a QA or Full sandbox with proper data.

  • User Acceptance Testing (UAT) – Final testing by end users or product owners to ensure the solution meets business requirements. Usually in a Full sandbox or a UAT sandbox that mimics production closely. Successful UAT is usually a gate to deploy.

  • Regression Testing – Re-running broad test suites to ensure new changes didn’t break existing functionality. Can be manual or automated. Ideally triggered every release (or even every build in CI for automated unit tests).

  • Smoke Testing – A quick, high-level test after deployment to ensure basic functions work (e.g., login, create records, etc.). Verifies that the deployment didn’t fundamentally break the system.

  • Test Coverage – Percentage of code lines executed by tests. Salesforce requires 75% minimum. High coverage is good, but ensure meaningful assertions (don’t write dummy tests just to raise coverage).

  • Representative Data – Test data that closely resembles real production data (in structure and volume) so that tests are valid. E.g., if production has accounts with thousands of contacts, a performance test should simulate that.

  • Data Masking/Anonymization – Process of hiding real personal data in sandbox environments. E.g., replace customer emails with dummy emails in a Full sandbox so testing does not expose real PII.

  • Automated Testing Tools – Know examples: Selenium (for UI), JUnit (for non-SF code), Provar or Autotester for Salesforce UI, etc. Even if not deeply tested, a question might mention “automated testing”, so associating it with Selenium or similar is useful.

  • Performance Testing – Using tools or scripts to simulate high usage (like many concurrent users or large data operations) in a Full sandbox to see if there are any performance bottlenecks (SOQL queries, page load times).

  • Test Levels in Deployment – Run All Tests vs. Run Local Tests vs. Run Specified Tests. Know that “Run Local Tests” runs all tests except managed package tests, which can save time (it is the default test level for production deployments).

  • Apex Hammer Test – Salesforce’s internal process of running all customer tests on new releases. Not likely on exam, but it’s why sometimes Salesforce finds issues and fixes platform bugs – demonstrates the importance of having good tests, since Salesforce will run them in previews.

Releasing (Release Strategy & Packages) – 13%

Overview: The Releasing domain is about how to deliver changes to users in a governed, strategic way. A major topic here is managed vs. unmanaged vs. unlocked packages – specifically, analyzing use cases and considerations for each. We covered package types under System Design, but let’s summarize from a release perspective:

  • Unmanaged Packages: Primarily used for one-off distribution of metadata (like sharing sample apps or configurations). They are not upgradable – once installed, the components become independent in the target org. Use case: perhaps a consulting partner hands off an unmanaged package of code to a client as a starting point. Consideration: you cannot push updates; upgrading means reinstalling another package or manual changes. Also, no namespace isolation (components from an unmanaged package can conflict with existing ones). Typically not used for long-term enterprise release management because of these limitations.

  • Managed Packages: Ideal for ISV releases (AppExchange apps) and sometimes for internal multi-org product lines. They have a namespace, allowing code isolation and the ability to push upgrades or publish new versions that subscribers can install. Managed packages allow controlled evolution: certain components can be made editable by the subscriber or not. For internal use, some companies with multiple orgs adopt managed packages to deploy a “core” set of functionality to various orgs with version control. Considerations: managed package code is hidden (IP protection) and once released, some components cannot be changed (e.g., you can’t remove a public Apex method in a managed package easily once released). Also, managed packages have a whole lifecycle (beta, released versions) to manage. They require careful versioning strategy. So for the exam, if the scenario is an ISV or a need for modular, versioned releases with IP protection, managed package is the answer.

  • Unlocked Packages: Designed for enterprise release management. Think of them as a way to modularize and version your org’s metadata flexibly. Unlocked packages are upgradable (you can install a new version over an old one) and can be used across orgs, but they don’t enforce IP hiding – admins can still tweak things in the installed org if needed. This is great for internal use because it adds agility without the strict lock-down of managed packages. Use cases: dividing a large org’s metadata into packages (maybe a package per project or per department) so that each can be developed and released independently. Considerations: adopting unlocked packages requires using Salesforce DX and source control. Also, you have to manage package dependencies – e.g., if Package A (core) must be installed before Package B (which extends A). The exam might present a scenario like “multiple teams want to release features on different schedules in the same org” – one answer could be to use unlocked packages so each team’s work is a separate package that can be versioned and tested individually, then installed when ready. Additionally, unlocked packages enable continuous integration: you can attach a version number to a set of metadata and promote it through environments (dev → test → prod) with consistency (a short CLI sketch after the comparison list below shows this flow). They also facilitate rollback to a prior version if something goes wrong (by reinstalling the older version), though data changes would need separate handling.

When comparing package types, also consider where each is built: first-generation managed packages are created in a Developer Edition packaging org, second-generation managed packages require a namespace linked to a Dev Hub, and unlocked packages are built with Salesforce DX using a Dev Hub (typically with scratch orgs). The exam may not dive into creation details but focuses on usage and implications. So memorize a few key differences:

  • Upgradability: Managed = yes, Unlocked = yes, Unmanaged = no.

  • Visibility of code: Managed = hidden (some parts), Unlocked/Unmanaged = visible/editable.

  • Namespace: Managed = always has a namespace; Unlocked = namespace is optional; Unmanaged = no namespace.

  • Preferred for: Managed = AppExchange/multiple customer deployments; Unlocked = internal modular development; Unmanaged = simple share or one-time deploy.
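To make the unlocked-package flow concrete, here is a minimal sketch using sfdx-style commands (the package alias “Core”, the 04t version ID, and the org aliases are placeholders; flag names vary by CLI version):

    # Build a new, immutable version of the unlocked package from source (requires a Dev Hub)
    sfdx force:package:version:create -p "Core" -x -w 10

    # Install (or upgrade to) that version in each org – promoting the same 04t ID
    # through UAT and production is what makes the release repeatable
    sfdx force:package:install -p 04tXXXXXXXXXXXXXXX -u uat-sandbox -w 10
    sfdx force:package:install -p 04tXXXXXXXXXXXXXXX -u production -w 10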

Next, release management strategy in this domain ties back to ALM but focuses specifically on the actual rollout to production. Given a scenario, you must recommend an appropriate strategy to release changes. Consider dimensions like release frequency (e.g., agile continuous releases vs. big scheduled releases), release timing (weekends or off-hours vs. business hours), and communication. For example, if a customer cannot afford disruption, you might suggest a canary release or phased activation: deploy new features turned off (maybe via a feature flag built on a Custom Setting or Custom Metadata Type) and then enable them gradually. Or, if multiple teams release to one org, consider a Release Manager role to coordinate and bundle changes.
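As a rough illustration of “deploy dark, activate later”, here is a hedged sketch assuming a hypothetical Custom Metadata Type Feature_Flag__mdt whose record gates the new feature (all names and paths are placeholders; a Custom Setting updated via a data load achieves the same effect):

    # Release day: deploy the new feature along with its flag record set to inactive
    sfdx force:source:deploy -p force-app/main/default -u production -l RunLocalTests

    # Later: activate the feature by deploying only the updated flag record
    sfdx force:source:deploy -p force-app/main/default/customMetadata/Feature_Flag.New_Quote_Flow.md-meta.xml -u production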

Often the simplest categorization of release strategies is scheduled batch releases (bundling changes into a scheduled deployment, say monthly or quarterly) versus continuous releases (deploying whenever features are ready, potentially many times a week). The exam scenario might hint at the organization’s risk appetite: a risk-averse enterprise might do quarterly releases with extensive UAT and training (heavyweight but safe), whereas a startup might do rapid continuous delivery with automated tests (lightweight, but it needs strong CI/CD). Recognize also that Salesforce’s cloud nature means seasonal platform releases are an implicit part of release management – reading release notes and doing an impact assessment is part of the strategy. A good release strategy always includes user training or change management: e.g., “Recommend a release strategy that includes a sandbox training environment for end users prior to go-live, and a communication plan (release notes or webinars) for each new deployment.”

Another piece: the exam objective asks you to map a sandbox strategy to a specific release plan, considering multiple project streams, training, staging, and hotfixes. This essentially merges earlier environment planning with release execution. In other words, in a release plan timeline, you should place environment milestones. For instance: Project Stream A and Stream B develop in parallel in separate dev sandboxes; they merge into an integration sandbox by week 4; UAT happens in a Full sandbox in week 5; production deployment follows in week 6. If a hotfix is needed at week 3, you use a hotfix sandbox (or temporarily repurpose one of the dev sandboxes), deploy the fix immediately, and then retrofit it into the integration line. The examiners want to see that you can coordinate multiple parallel workstreams and still deliver a coherent release to production without conflicts. A question could be: “Multiple projects are ongoing and the organization also needs the ability to deploy emergency fixes. How would you structure the sandbox and release strategy?” An ideal answer: use separate development sandboxes for each project stream and integrate changes in a common test sandbox. Establish a regular release train (e.g., monthly) for planned deployments. Additionally, reserve a dedicated hotfix sandbox (or use a source control branch labeled ‘hotfix’) for emergency fixes; test hotfixes quickly in a staging sandbox, deploy to production, then merge those changes back into the main development line. This demonstrates that you have considered both planned and unplanned releases.
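For the hotfix path specifically, a minimal source-control sketch (branch and tag names are illustrative; the essential point is that the fix is merged back into the main development line after it ships):

    # Branch from the state that matches production (e.g., the last release tag)
    git checkout -b hotfix/validation-rule-fix release-1.4

    # Test quickly in a staging or hotfix sandbox, then deploy to production
    sfdx force:source:deploy -p force-app/main/default/objects/Opportunity -u production -l RunLocalTests

    # Merge the fix back so the next planned release does not overwrite it
    git checkout develop
    git merge hotfix/validation-rule-fix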

Strategic Tip: Emphasize coordination and documentation in release management. Even though the exam is technical, an architect’s role in release management is to ensure no surprises. That means release notes, backout plans, and proper use of sandbox environments. If a question asks for a release management approach for a scenario, mention things like a Release Calendar, a Change Freeze window before go-live, use of a sandbox preview for training end-users, etc. These practical details can set your answer apart. And always tie the strategy to business needs: e.g., “for a mission-critical system with users worldwide, do deployments on weekends or off-hours and have a rollback strategy (like keeping the previous metadata API deployment package handy to redeploy if needed).”

Practical Exercises:

  • Release Calendar Draft: Take a quarter (3 months) and map out a hypothetical release calendar for an org. Mark Salesforce’s own release dates (e.g., Summer ’25 in June) and decide where your project releases would fit (maybe right after a Salesforce release to include any needed adjustments). This will help you internalize planning around Salesforce’s schedule.

  • Feature Flag Experiment: Implement a simple feature flag in a dev org: e.g., a Custom Setting or Custom Metadata Type that enables/disables a new piece of functionality (like a Lightning Component visibility). Simulate how you could deploy the component turned “off” and later turn it “on” via data change. This solidifies the concept of deploying without immediate activation.

  • Trailhead – Release Management Module: Salesforce has Trailhead content on release readiness and strategies (e.g., Salesforce Release Readiness Strategies). Go through those to pick up any tips on sandbox preview, etc., which are exam-relevant.

Key Terms and Concepts for Memorization:

  • Managed Package – Upgradable package with namespace, typically for distribution outside one org (AppExchange). Key points: can push upgrades, can have license management, code is protected.

  • Unlocked Package – Internal use package, upgradable, no strict IP protection. Key points: great for modular development and CI/CD, requires source-driven approach.

  • Unmanaged Package – One-time metadata bundle, not upgradable. Key: easy to create and install, but there is no versioning or upgrade path – avoid it if future updates will be needed.

  • Org-dependent Unlocked Package – (FYI) a variant of unlocked packages that can depend on unpackaged metadata in the installation org (an advanced topic – likely not covered in depth on the exam, but know it exists for orgs where some org-specific dependencies can’t be packaged).

  • Release Train – A concept from Scaled Agile: regular scheduled releases (like a train leaving the station on schedule). Even if features aren’t ready, the release goes out with what is ready. Encourages discipline.

  • Continuous Delivery vs. Batch Releases – Continuous: deploy small increments frequently (could be daily/weekly) with automation; Batch: accumulate changes for a bigger, less frequent deployment.

  • Change Freeze – A period (often just before a release or before a Salesforce seasonal upgrade) where no new changes are allowed, to stabilize testing.

  • Sandbox Preview – Salesforce upgrades sandboxes on preview instances a few weeks before production gets a seasonal release. A strategy: keep at least one sandbox on a preview instance to test the upcoming release’s features against your org’s configuration.

  • Post-Release Monitoring – Although not explicitly in the exam guide, mention things like monitoring logs or user feedback immediately after a release as part of the strategy. E.g., “smoke test in production and monitor error logs or automated monitoring (New Relic, etc.) to catch any issue early.”

  • Hotfix Pipeline – A separate path for emergency fixes. Could be a dedicated sandbox/branch that can be deployed out-of-band. Must integrate back to mainline to avoid code divergence.

  • Communication Plan – Release notes, training sessions, knowledge articles that accompany a release so users know what changed. Good release management always includes this.

Operating (Post-Release Governance & Maintenance) – 10%

Overview: The Operating domain addresses what happens after go-live, including handling changes made directly in production and managing releases in multi-org environments over time. Salesforce admins (especially in smaller orgs) sometimes make urgent changes in the production org – for example, creating a new field or modifying a validation rule on the fly. The exam expects you to understand and explain the implications of making changes directly in production and how to incorporate those back into the formal development lifecycle. Direct prod changes can cause the source of truth to drift – your sandbox or repository no longer matches prod. This can lead to overwritten changes or lost work in the next deployment. It’s generally a bad practice to do significant changes in prod; however, minor tweaks or emergencies do happen.

If a scenario says, “A system administrator often creates reports and minor fields directly in production,” you should explain that these changes need to be captured and propagated. One way: run a metadata comparison (between prod and a dev org, or using a source control diff) to identify differences, then check those into source control or deploy them to sandboxes so everything stays in sync. Another approach is to discourage this habit via governance (have a policy that all changes go through ALM) – but realistically, emergencies occur. So an architect should set up a process: e.g., use the Salesforce CLI (SFDX) or another metadata tool to retrieve the new field metadata from prod and commit it to the repo, then deploy it to the relevant sandboxes. Also, any direct change bypasses testing and could introduce issues – highlight that risk: making changes in prod means you skip integration testing, which could impact users unexpectedly. So the implication is increased risk and technical debt until those changes are merged into the development pipeline.

For the exam, know the steps to integrate a prod hotfix back into ALM: 1) Document the change, 2) Reproduce it in source (e.g., add the same config in a dev branch or retrieve from prod), 3) Redeploy to other orgs (dev/UAT) if they need that config, 4) Include in next release. This prevents the “hotfix gets overwritten by next deployment” scenario.
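A minimal sketch of steps 2 and 3, assuming an admin created a field Region__c on Account directly in production (the field name, org aliases, and repository layout are placeholders):

    # Step 2: pull the prod-only change into the local project and source control
    sfdx force:source:retrieve -m "CustomField:Account.Region__c" -u production
    git add force-app/main/default/objects/Account/fields/Region__c.field-meta.xml
    git commit -m "Back-promote Account.Region__c created directly in production"

    # Step 3: push the same change to the sandboxes that need it so environments stay in sync
    sfdx force:source:deploy -m "CustomField:Account.Region__c" -u integration-sandbox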

Now, multi-org release artifact management is about coordinating deployments across multiple Salesforce orgs (if the company has more than one production org). Imagine a company with separate Sales Cloud and Service Cloud orgs, or separate orgs per region. They might develop some components that are common and should be deployed to all orgs, and some that are org-specific. Approaches:

  • Use a common repository for shared components and perhaps separate repositories for org-specific ones. You might have a core managed package that all orgs install (so you build new features once, then install the package in all orgs).

  • Or maintain multiple branches in version control, one per org, that merge from a common trunk for shared things.

  • Another approach is using a tool like Gearset’s multi-org deployment pipelines to deploy changes to multiple orgs in tandem, ensuring consistency.

The challenges include keeping orgs from diverging too much (which complicates future updates) and handling different release schedules if, say, one org wants a feature sooner than another.

If asked how to manage releases for multiple orgs, you could say: Establish a baseline package of common functionality that is version-controlled. Automate deployments of that baseline to all orgs so they stay consistent (for shared features). For org-specific changes, maintain separate project streams. Ensure a governance process to evaluate if a new feature goes into all orgs or just one. Also mention the complexity: “more orgs means more release windows… will all orgs be updated at once or separately?” If the scenario suggests independent teams per org, you might allow independent release schedules but with an overarching governance to sync up core features periodically.

In plain terms, release artifact management in multi-org means you have to manage multiple “production versions” of your Salesforce solution. Perhaps you tag releases by org (Release 1.0-NA, 1.0-EU for North America and Europe orgs if they have slight differences). This is an advanced topic, but a safe recommendation is to implement modular packaging: things common to all orgs are developed once (in a package or at least in a central codebase) and deployed everywhere, reducing duplicate effort. Also, highlight the need for tools: using CI/CD to deploy to multiple orgs can reduce manual errors – e.g., a Jenkins pipeline that deploys to Org A, Org B, Org C sequentially.
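As a sketch of what automating that baseline rollout can look like (the org aliases and package version ID are placeholders; a Jenkins or other CI job would simply run the same commands):

    # Install the same validated baseline package version in every production org, one after another
    for ORG in prod-na prod-eu prod-apac; do
      sfdx force:package:install -p 04tXXXXXXXXXXXXXXX -u "$ORG" -w 30 || exit 1
    done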

Strategic Tip: This being a newer and smaller section (10% weight), answers likely tie together with earlier sections. If a question is specifically about multi-org operations, recall content from Planning and Releasing about multi-org pros/cons and strategies. If it’s about direct prod changes, think of the real-world issues those changes cause (lack of documentation, potential to be overwritten, compliance concerns if untracked). Often the best-practice answer is “avoid direct prod changes except for emergencies, and even then, back-port those changes into source control ASAP.” The exam wants you to show you can maintain control and quality even when such changes happen.

Practical Exercises:

  • Production Change Log: Simulate a scenario: make a small change in a production-like org (for example, edit a validation rule in a Developer Edition “prod” org). Then go to your metadata repository (or another sandbox) and try to identify that change, using a diff tool or by retrieving metadata (see the drift-check sketch after these exercises). Document how you would merge it. This will teach you how a seemingly tiny direct change can be tracked, and it ensures you remember to mention tools like a metadata diff.

  • Multi-Org Diagram: If you have multiple Trailhead Playgrounds, imagine one is “Org A” and one is “Org B”. Create a diagram or list showing how you would deploy a new custom object to both: e.g., using a CI job that deploys to Org A and Org B, or using package installation in both. This helps visualize multi-org deployment flows.

  • Policy Draft: Write a brief policy for an organization addressing “Making Changes in Production”. Include why it’s discouraged, what steps must be taken if it occurs (like documentation and retrofitting), and maybe a permission-management tip (some companies restrict admin permissions in production to prevent untracked changes). This solidifies your stance and reasoning, useful for exam phrasing.
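For the Production Change Log exercise above, a minimal drift-check sketch (the manifest path and org alias are placeholders): retrieving production metadata into your working tree and diffing against the repository surfaces anything that was changed directly in prod.

    # Pull the current production metadata into the local source tree
    sfdx force:source:retrieve -x manifest/package.xml -u production

    # Anything git reports as modified was changed directly in production
    git status
    git diff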

Key Terms and Concepts for Memorization:

  • Direct Production Changes – Changes made point-and-click in a live org, outside the normal deployment process. Implications: They bypass testing, can cause inconsistencies with sandboxes, and must be captured to avoid being overwritten.

  • Back-Promotion – The act of taking a production hotfix or change and applying it back to dev environments or source control. Essential to sync environments after a prod-only change.

  • Setup Audit Trail – In Salesforce Setup, the Setup Audit Trail shows configuration changes (who changed what and when). Useful for identifying what was changed directly in prod. It can be part of the solution: regularly review the Audit Trail to catch unauthorized changes.

  • Change Tracking Tools – Some third-party tools (or Salesforce’s own DevOps Center) can track differences between orgs. Know that technologies exist to monitor config drift (so you can mention using a tool to monitor prod vs. dev differences).

  • Multi-Org Coordination – Releasing to multiple orgs might require a central release team coordinating deployments to each org, ensuring one org’s changes don’t negatively affect another if they share integrations.

  • Core vs. Context – A term (from Salesforce multi-org guidance) meaning you decide which processes are core (common across orgs) and which are context-specific (unique to one org). Core processes may reside in a managed package or a common repository.

  • Repository Strategy – Single repo vs. multiple repos: a single monolithic repo for all orgs vs. separate repos per org, possibly with a common shared library. Understand the trade-offs (a single repo ensures consistency but can become complex; multiple repos allow flexibility but make it harder to keep common components in sync).

  • Org Sync Frequency – If orgs diverge, plan periodic “synchronization releases” for common functionality. For instance, ensure each quarter that all orgs are brought up to the latest common baseline.

  • Post-Release Reviews – After each release (especially in multi-org or after prod hotfixes), do a retrospective: what went well, any unexpected prod changes needed, etc., to improve the process. (Good governance practice to mention.)

  • Compliance – Some industries require documenting all changes, even admin tweaks. A direct prod change might break compliance if not documented. Mention that in regulated scenarios, all changes must be tracked in a system of record (like a ticketing system). This ties back to governance.


Final Tips: The Development Lifecycle & Deployment Architect exam is testing your ability to design a robust, scalable development process on Salesforce. Focus on the “why” behind best practices: why use version control? why have a sandbox strategy? why avoid direct prod changes? – often because it reduces risk, improves quality, or supports growth. Time management in the exam is crucial; expect long scenario questions. Use this guide to quickly recall key points and ace those scenarios with confidence, citing best practices and Salesforce’s recommended approaches. Good luck on your certification journey!
