Reviewing Progress Regularly

Explore top LinkedIn content from expert professionals.

  • Marcus Chan (Influencer)

    Your reps aren’t broken. Your sales system is. | I help CROs and VPs of Sales at B2B companies fix the system so their team can finally perform | $950M+ in client revenue generated | Ex-Fortune 500 $195M/yr sales exec

    100,500 followers

    I just watched an AE lose a $1.2M deal after running a "successful" product trial that the prospect LOVED. After 8 weeks of work, the CFO killed it with five words: "Let's try our current vendor."

    After analyzing 200+ enterprise sales cycles at companies including Salesforce, HubSpot, Thomson Reuters, and Workday, I've identified the exact framework that separates 80%+ trial conversion rates from the industry average of 30%.

    The psychological shift required: stop treating trials as product demos and start treating them as RISK ELIMINATION EXERCISES.

    After being promoted 12 times and hitting #1 in every role before leading a 110-person team to $190M+ annually, I've developed a framework that's transformed how top companies run trials.

    THE 5-POINT TRIAL QUALIFICATION SYSTEM:

    1. 𝗣𝗥𝗢𝗕𝗟𝗘𝗠 𝗩𝗔𝗟𝗜𝗗𝗔𝗧𝗜𝗢𝗡
    Ask these 3 questions before any trial:
    → "What happens if you don't solve this in 90 days?" (quantify impact)
    → "How have you tried solving this before?" (establishes solution gap)
    → "Who else is affected?" (identifies stakeholders)
    These eliminate 68% of unqualified trials before they start.

    2. 𝗦𝗨𝗖𝗖𝗘𝗦𝗦 𝗗𝗘𝗙𝗜𝗡𝗜𝗧𝗜𝗢𝗡
    Document these 4 criteria:
    → Technical requirements (features that must work)
    → Business metrics (quantifiable outcomes)
    → Timeline requirements (implementation speed)
    → User adoption requirements (usage patterns)
    Get confirmation: "If we demonstrate [criteria], you'd move forward with purchase by [date]. Correct?"

    3. 𝗦𝗧𝗔𝗞𝗘𝗛𝗢𝗟𝗗𝗘𝗥 𝗠𝗔𝗣𝗣𝗜𝗡𝗚
    Create a "Decision Matrix" for:
    → Technical buyers (every trial user)
    → Economic buyers (CFO/budget holder)
    → Political influencers (who can kill it)
    → Current solution advocates (status quo beneficiaries)
    Document each person's personal win/loss if change happens.

    4. 𝗣𝗥𝗘-𝗧𝗥𝗜𝗔𝗟 𝗔𝗚𝗥𝗘𝗘𝗠𝗘𝗡𝗧
    Have legal review BEFORE starting: "We typically have legal review the agreement structure ahead of time so there are no surprises and to save us both time, so we can hit the December 1st deadline you set. Would you be open to this during the trial?"

    5. 𝗖𝗨𝗥𝗥𝗘𝗡𝗧 𝗩𝗘𝗡𝗗𝗢𝗥 𝗦𝗧𝗥𝗔𝗧𝗘𝗚𝗬
    Ask:
    → "Have you discussed these challenges with your current vendor?"
    → "What was their response?"
    → "What specific capabilities do they lack?"
    Document these to prevent the "let's try our current vendor" objection.

    RESULTS from this framework:
    ✅ Trial conversion: 32% to 83% in 60 days
    ✅ Average deal size: +40%
    ✅ Sales cycle: -37%
    ✅ Forecast accuracy: +92%
    ✅ Time on unsuccessful trials: -43%

    Hey Sales Leaders! Want to see how we can install these kinds of results into your org? Go here: https://lnkd.in/ghh8VCaf

  • Yamini Rangan (Influencer)
    167,680 followers

    It’s the time of the year for performance reviews. Every year, I remind myself that giving feedback comes down to this: “radical candor” plus “radical compassion.” If you are too candid/direct, you will make your team feel defensive. But if you soften your feedback too much (which I have seen too many leaders do), your message will not be clear. The net effect if you don't get the balance right is that your team will not grow.

    It’s a difficult balance to strike. We’ve all had moments where we’ve held back the feedback we planned to give because we don’t want to hurt someone’s feelings. But the truth is, when you deliver feedback from a place of wanting to help someone reach their potential, that actually builds trust. I always start there - I make sure that my team knows that I am deeply committed to their growth.

    So this performance review season, don’t be afraid to be direct. But remember: being direct does not mean being harsh. Show the person you care about their growth and then follow it up with a plan to help them develop. “Radical candor” plus “radical compassion” is the feedback formula that works!

    What mindset are you taking into this performance review season?

  • Sohrab Rahimi

    Director, AI/ML Lead @ Google

    22,969 followers

    Evaluating LLMs is hard. Evaluating agents is even harder. This is one of the most common challenges I see when teams move from using LLMs in isolation to deploying agents that act over time, use tools, interact with APIs, and coordinate across roles. These systems make a series of decisions, not just a single prediction. As a result, success or failure depends on more than whether the final answer is correct.

    Despite this, many teams still rely on basic task success metrics or manual reviews. Some build internal evaluation dashboards, but most of these efforts are narrowly scoped and miss the bigger picture. Observability tools exist, but they are not enough on their own. Google’s ADK telemetry provides traces of tool use and reasoning chains. LangSmith gives structured logging for LangChain-based workflows. Frameworks like CrewAI, AutoGen, and OpenAgents expose role-specific actions and memory updates. These are helpful for debugging, but they do not tell you how well the agent performed across dimensions like coordination, learning, or adaptability.

    Two recent research directions offer much-needed structure. One proposes breaking down agent evaluation into behavioral components like plan quality, adaptability, and inter-agent coordination. Another argues for longitudinal tracking, focusing on how agents evolve over time, whether they drift or stabilize, and whether they generalize or forget.

    If you are evaluating agents today, here are the most important criteria to measure:
    • 𝗧𝗮𝘀𝗸 𝘀𝘂𝗰𝗰𝗲𝘀𝘀: Did the agent complete the task, and was the outcome verifiable?
    • 𝗣𝗹𝗮𝗻 𝗾𝘂𝗮𝗹𝗶𝘁𝘆: Was the initial strategy reasonable and efficient?
    • 𝗔𝗱𝗮𝗽𝘁𝗮𝘁𝗶𝗼𝗻: Did the agent handle tool failures, retry intelligently, or escalate when needed?
    • 𝗠𝗲𝗺𝗼𝗿𝘆 𝘂𝘀𝗮𝗴𝗲: Was memory referenced meaningfully, or ignored?
    • 𝗖𝗼𝗼𝗿𝗱𝗶𝗻𝗮𝘁𝗶𝗼𝗻 (𝗳𝗼𝗿 𝗺𝘂𝗹𝘁𝗶-𝗮𝗴𝗲𝗻𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀): Did agents delegate, share information, and avoid redundancy?
    • 𝗦𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗼𝘃𝗲𝗿 𝘁𝗶𝗺𝗲: Did behavior remain consistent across runs or drift unpredictably?

    For adaptive agents or those in production, this becomes even more critical. Evaluation systems should be time-aware, tracking changes in behavior, error rates, and success patterns over time. Static accuracy alone will not explain why an agent performs well one day and fails the next.

    Structured evaluation is not just about dashboards. It is the foundation for improving agent design. Without clear signals, you cannot diagnose whether failure came from the LLM, the plan, the tool, or the orchestration logic. If your agents are planning, adapting, or coordinating across steps or roles, now is the time to move past simple correctness checks and build a robust, multi-dimensional evaluation framework. It is the only way to scale intelligent behavior with confidence.
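    As a minimal sketch of what such a multi-dimensional evaluation record could look like, assuming a 0-to-1 score per dimension: the class names, fields, and the simple drift heuristic below are illustrative assumptions, not taken from ADK, LangSmith, or any framework named above.

    ```python
    # Minimal sketch of a per-run agent evaluation record plus a longitudinal
    # tracker; dimension names mirror the criteria in the post, everything else
    # (field names, 0-1 scoring, drift heuristic) is an illustrative assumption.
    from dataclasses import dataclass, field
    from statistics import mean, pstdev

    @dataclass
    class AgentRunEvaluation:
        run_id: str
        task_success: float       # did the agent complete a verifiable outcome?
        plan_quality: float       # was the initial strategy reasonable and efficient?
        adaptation: float         # tool-failure handling, retries, escalation
        memory_usage: float       # was memory referenced meaningfully?
        coordination: float = 1.0 # multi-agent only; defaults to neutral

        def overall(self) -> float:
            """Average the dimension scores into one per-run summary score."""
            return mean([self.task_success, self.plan_quality, self.adaptation,
                         self.memory_usage, self.coordination])

    @dataclass
    class LongitudinalTracker:
        """Tracks per-run scores over time to flag drift or instability."""
        history: list = field(default_factory=list)

        def record(self, evaluation: AgentRunEvaluation) -> None:
            self.history.append(evaluation.overall())

        def is_stable(self, window: int = 10, max_std: float = 0.15) -> bool:
            # Crude stability check: low score variance over the recent window.
            recent = self.history[-window:]
            if len(recent) < 2:
                return True  # not enough runs yet to call it unstable
            return pstdev(recent) <= max_std
    ```

    In practice the per-dimension scores would come from verifiable checks, rubric-based judges, or human review; the point of the structure is that each run is scored on several axes and tracked over time, not collapsed into a single pass/fail.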

  • Lauren McGoodwin

    Principal Content Strategist @ Atlassian | Brand & Content Marketing | Speaker & Author | Career Contessa Podcast Host

    30,964 followers

    I’ve been interviewing candidates for a new role and there’s one thing I’ve seen 90% of them struggle with: sharing the story of their career achievements. But don’t worry—I’ve got a simple hack that can help you overcome it:

    ✏️ Create a monthly ritual to review and document every significant work win, and turn each into a mini-case study.

    Documenting your wins regularly will save you HOURS when you prep for your next interview—plus it’s great fodder for:
    ⤷ your annual performance review
    ⤷ your 1x1s with your manager
    ⤷ your resume

    Here’s my 3-step process:

    1️⃣ Weekly Check-in: Turn work ➡️ wins
    ⤷ Start a weekly habit of documenting your wins (grab my free template in the comments).
    ⤷ Block 30 minutes on your calendar every Friday to hold yourself accountable.
    ⤷ Ask yourself, “What did I accomplish this week that moved the needle?”

    2️⃣ Monthly Recap: Turn wins ➡️ headlines
    ⤷ Identify 1–2 significant achievements and summarize them using this formula: [Action Verb] + [Specific Metric] + [Timeframe] + [Business Impact]
    ⤷ Make a bullet-point list (so you can stay organized and repurpose it for your resume later!)
    ⤷ Include dates and timelines for your own records—you’ll use them in step 3.

    3️⃣ Quarterly Story-Building: Headlines ➡️ stories
    ⤷ Identify your top 3 quarterly wins.
    ⤷ Start a fresh document and map out each of those wins using the STAR method:
    ⭐ Situation: What was the context?
    ⭐ Task: What was your specific responsibility?
    ⭐ Action: What steps did you take?
    ⭐ Result: What measurable outcome did you achieve?
    ⤷ Ask AI to help you share that information as a story. Here’s the prompt I like to use:
    ✍ Can you help me turn this achievement into a story using the STAR framework for an upcoming interview for a [title here] role? Please keep it concise. [paste win]

    Here’s what this looks like in action 👇
    ⤷ Weekly win: March ’23 → Decreased CPA by 28% & increased conversion by 15%
    ⤷ Monthly recap: Optimized paid search campaigns in March 2023 that decreased CPA by 28% while increasing conversions by 15%, resulting in higher profit margins for the company.
    ⤷ Quarterly story: When I joined the marketing team in January 2023, our paid search campaigns were generating leads but at a high CPA, with budget constraints approaching in Q2. I was tasked with reducing CPA without sacrificing lead volume. In March 2023, I audited our campaigns and implemented three key changes: restructured ad groups with tightly-themed keywords, refined match types with strategic negative keywords, and A/B tested value-focused ad copy. By month-end, these optimizations decreased cost-per-acquisition by 28% while increasing conversion volume by 15%, saving budget and creating a scalable framework for future campaigns.

    What are your tips for storytelling in your interviews? I’d love to hear them.

  • Ann Hiatt

    Consultant to scaling CEOs | Former Right Hand to Jeff Bezos of Amazon & Eric Schmidt of Google | Weekly HBR contributor | Author of Bet on Yourself

    24,618 followers

    Unlock the Power of High-Quality Performance Reviews

    'Tis the season for annual performance reviews. They are dreaded by some (both managers and direct reports alike), but a GOLDEN opportunity for growth, alignment and acceleration when done right!

    When I became a people manager for the first time, I had no formal training on how to conduct a formal performance evaluation, which made the process more intimidating and time-consuming than effective. It took me a while to develop some best practices which I still use today. Here are some actionable tips for how to make these conversations transformative instead of transactional:

    Best Practices for Managers:
    1️⃣ Make it a Dialogue, Not a Monologue: Listen as much as you speak. Performance reviews should be a two-way street.
    2️⃣ Focus on Specifics: Give actionable, evidence-based feedback tied to clear examples—not vague generalizations.
    3️⃣ Balance Praise with Growth Opportunities: Celebrate wins but also highlight areas for improvement with a clear path forward.
    4️⃣ Set Goals, Not Just Grades: Use reviews to align on SMART goals for the future.
    5️⃣ Document & Follow Up: Don’t let feedback vanish post-meeting. Document outcomes and revisit them regularly.

    Common Mistakes to Avoid:
    🚫 Waiting Until Review Time: Feedback should be ongoing—not a once-a-year surprise.
    🚫 Being Too General: Saying "Good job" or "Needs improvement" without specifics leaves employees guessing.
    🚫 Avoiding Tough Conversations: Constructive feedback can be uncomfortable, but it’s essential for growth.
    🚫 Ignoring Employee Input: This isn’t just your show—make space for their perspective!

    Tips for Employees: Get Better Feedback
    1️⃣ Be Proactive: Ask for feedback regularly—not just during reviews. Questions like “What’s one thing I could do better?” show initiative and openness.
    2️⃣ Come Prepared: Bring accomplishments, challenges, and goals to the table. Show ownership of your growth.
    3️⃣ Clarify Expectations: Ask, “What does success look like in my role / on this project?” This helps align your work with manager expectations.

    Year-Round Impact
    ✔️ Schedule Regular Check-Ins: Quarterly or monthly conversations keep feedback fresh and actionable.
    ✔️ Use Tools to Track Progress: Utilize shared documents or platforms to monitor goals throughout the year.
    ✔️ Create a Feedback Culture: Encourage real-time recognition and coaching on a weekly basis.

    A high-quality performance review isn’t just a meeting—it’s a tool for growth, alignment, and stronger relationships. Let’s move away from the “annual checkbox” and toward continuous improvement! What’s your secret to impactful performance reviews? Drop your tips in the comments!

    #Leadership #Feedback #PerformanceManagement #CareerGrowth

  • Ayoub Fandi

    GRC Engineering Lead @ GitLab | GRC Engineer Podcast and Newsletter | Engineering the Future of GRC

    27,579 followers

    Before you automate anything, answer this: Can you document your process in 10 steps? If not, automation will just replicate your chaos faster.

    🔧 Most GRC teams get this backwards
    They spend weeks building AI validators, evidence collectors, or risk scorers. Then wonder why outputs are inconsistent, inaccurate, or unusable. The problem isn't the AI. It's the workflow underneath. The workflow audit comes first. The automation comes second.

    📧 This week in GRC Engineer: "Engineer Your GRC Process Before You Automate It"
    The 30-minute audit that shows whether your workflows are ready for automation:
    ✅ Input Clarity - Do you know what data you actually need?
    ✅ Process Definition - Can someone else follow your steps and get the same result?
    ✅ Output Consistency - Does the same request produce the same format every time?
    ✅ Repeatability - Can anyone execute this without tribal knowledge?

    Copy-paste checklist included. Score your workflows. Fix one thing this week. Read here: https://lnkd.in/e_-zR2Rv

    Last week: Fixed your prompts
    This week: Audited your workflows
    Next week: Validation frameworks to ensure you can scale automation

    The GRC professionals who master process engineering + AI scaffolding will define the next decade.

    #GRCEngineering #ProcessDesign #Automation
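    As a rough illustration of how that four-item readiness audit could be scored in code: the criteria below mirror the post, but the function names, pass/fail logic, and example answers are illustrative assumptions, not the newsletter's actual checklist.

    ```python
    # Minimal sketch of a workflow-readiness audit as a yes/no checklist score.
    # The four criteria mirror the post; the verdict logic is a made-up example.
    AUDIT_CRITERIA = {
        "input_clarity": "Do you know what data you actually need?",
        "process_definition": "Can someone else follow your steps and get the same result?",
        "output_consistency": "Does the same request produce the same format every time?",
        "repeatability": "Can anyone execute this without tribal knowledge?",
    }

    def audit_workflow(answers: dict[str, bool]) -> str:
        """Return a rough readiness verdict from yes/no answers to the criteria."""
        passed = sum(answers.get(name, False) for name in AUDIT_CRITERIA)
        if passed == len(AUDIT_CRITERIA):
            return "ready to automate"
        return f"fix {len(AUDIT_CRITERIA) - passed} gap(s) before automating"

    # Example: a workflow with undocumented steps and inconsistent outputs.
    print(audit_workflow({
        "input_clarity": True,
        "process_definition": False,
        "output_consistency": False,
        "repeatability": True,
    }))  # -> "fix 2 gap(s) before automating"
    ```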

  • Gopal A Iyer

    Executive Coach (ICF-PCC | EMCC SP) | Author: The Other Half of Success | Helping CXOs & Founders Realign People, Purpose & Performance | Culture Transformation | TEDx Speaker | IIMK | Stanford GSB

    46,264 followers

    Imagine you're walking into a meeting room, knowing you're about to discuss the annual performance feedback with one of your team members. Your palms are sweaty, and your heart is racing—not because you're unprepared, but because you're unsure of how the person will take the feedback. Feedback sessions can be nerve-wracking for both the giver and the receiver. But what if someone told you that feedback, when done correctly, could actually be a powerful tool to foster personal growth and team success?

    People often view feedback as a daunting task. The biggest myth is the misconception that feedback is about the individual rather than their behaviours. Many leaders also hesitate to give feedback, fearing that it might hurt feelings or demotivate team members. However, the real issue is typically a lack of preparation. Effective feedback requires observation (increasingly difficult in today's hybrid work environments), data to back up claims, and a clear understanding of expectations. Without these elements, feedback sessions can seem unfounded and personal rather than objective and developmental.

    When I took over team management for the first time in 2008, I was trained to use various methods of giving feedback, including the well-known Sandwich or Hamburger Technique. However, one model that has stayed with me is the Situation-Behaviour-Impact (SBI) model. It helped me focus on specific situations, the behaviours I observed, and the impact those behaviours had on the team or project. Focusing on instances and outcomes allows feedback to be less about the person and more about their actions within a context, making it easier to digest and act upon.

    Instead of "You're not collaborating effectively," which is vague and can feel like a personal attack, one can say, "During yesterday's meeting, when you interrupted your colleague, it created tension and disrupted the workflow. Let's explore ways to express your ideas while also encouraging others to share theirs." This not only clarifies the issue but also provides a constructive pathway for improvement.

    Fostering an environment where feedback is regularly shared is an integral part of the leader's role. Top leaders ensure that feedback is a regular weekly process, not just a quarterly event. This shift in perspective can significantly change how team members perceive and react to feedback.

    The art of giving feedback is crucial for leadership and team development. Have you or someone in your team struggled to give or receive feedback? How do you incorporate feedback into your daily routine to create a positive impact on your team?

    If you like this, follow Gopal A Iyer for more.

    In Pic: A Veg Burger at Cafe Trofima in Mumbai - Inspiration for today's post! :)

    #Feedback #Annualperformancereviews #LIPostingChallengeIndia

  • Pan Wu (Influencer)

    Senior Data Science Manager at Meta

    51,175 followers

    A "sampled success metric" is a performance measure or evaluation criterion calculated from a sample or subset of data rather than the entire population. Its calculation often involves higher costs per sample, such as manual review, leading to a trade-off between sample size and metric accuracy/sensitivity. In this tech blog, written by the data science team from Shopify, the discussion revolves around how the team leverages Monte Carlo simulation to understand metric variability under various scenarios to help the team make the right trade-offs. Initially, the team defines simulation metrics to describe the variability of the sampled success metric. For instance, if the actual success metric is decreasing over time, the metric could indicate how many months of sampled success metric would show a decrease, termed as "1-month decreases observed". Then, the team defines the distribution to run the Monte Carlo simulation. Monte Carlo simulation, a computational technique using random sampling to estimate outcomes of complex systems or processes with uncertain inputs, draws samples from a dedicated distribution that matches business needs. Based on past observations, the team’s application follows a Poisson distribution. Next comes the massive simulation phase, where the team runs multiple simulations for one parameter and then changes various parameters to simulate different scenarios. The goal is to quantify how much the sample mean will differ from the underlying population mean given realistic assumptions. The final result provides a clear statistical distribution of how much extra sample size could lead to metrics variability decrease and increased accuracy. This case study demonstrates that Monte Carlo simulation could be a valuable toolkit to add to your decision-making and data science knowledge. #datascience #analytics #metrics #algorithms #simulation #montecarlo #decisionmaking – – –  Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:    -- Spotify: https://lnkd.in/gKgaMvbh   -- Apple Podcast: https://lnkd.in/gj6aPBBY    -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/dKnrZzzV 

  • Aditya Maheshwari

    Helping SaaS teams retain better, grow faster | CS Leader, APAC | Creator of Tidbits | Follow for CS, Leadership & GTM Playbooks

    20,470 followers

    50% of employees say performance reviews are useless. Here's how to fix that.

    I've spoken to hundreds of people over the years. The pattern is painfully consistent. Manager talks. Employee nods. Nothing changes. But the data is even more concerning: more than half of employees feel formal reviews contribute nothing to their growth. No surprise there.

    The problem exists on both sides of the table:
    - Employees dump all responsibility for these sessions on their managers
    - Managers have zero training on how to make these conversations meaningful
    The result? Monologues that waste everyone's time.

    But here's the thing about great performance reviews: They're not monologues—they're conversations. Want to transform your review sessions into career accelerators? Here's how:

    For managers:
    - Implement structured frameworks like McKinsey & Company's OILS (Observation, Impact, Listening, Solutions/Strategy)
    - Work together to identify what's actually causing performance challenges (Is it time management? Communication gaps?)
    - Establish clear priorities with specific targets and timelines for the next period

    For employees:
    - Come prepared with defined goals and the specific skills you need to develop in the next 6-12 months
    - Bring a concise, tactical action plan to ensure alignment and measurable progress

    Whatever it takes, remember that performance growth is a two-way street. These sessions should empower both sides to grow, not just check administrative boxes. What's your best tip for making reviews actually matter? I would love to hear.

    ♻️ Reshare this post if it can help others!

    ▶️ Want to see more content like this? You should join 2297+ members in the Tidbits WhatsApp Community! 💥 [link in the comments section]

  • Greg Coquillo (Influencer)

    Product Leader @AWS | Startup Investor | 2X Linkedin Top Voice for AI, Data Science, Tech, and Innovation | Quantum Computing & Web 3.0 | I build software that scales AI/ML Network infrastructure

    226,669 followers

    ✋ Before rushing into training models, do not skip the part that actually determines whether the model is useful: measuring performance. Without the right metrics you are not evaluating a model, you are just validating your assumptions. Check out these nine metrics every ML practitioner should understand and use with intention 👇

    1. Accuracy
    Good for balanced datasets. Misleading when classes are skewed.

    2. Precision
    Of the samples you predicted as positive, how many were correct. Important when false positives are costly.

    3. Recall
    Of the samples that were actually positive, how many you caught. Critical when false negatives are dangerous.

    4. F1 Score
    Balances precision and recall. Reliable when you need a single metric that reflects both types of error.

    5. ROC AUC
    Measures how well a model separates classes across thresholds. Useful for model comparison independent of cutoffs.

    6. Confusion Matrix
    Exposes the exact distribution of true positives, false positives, true negatives, and false negatives. Great for diagnosing failure modes.

    7. Log Loss
    Penalizes confident wrong predictions. Important for probabilistic models where calibration matters.

    8. MAE (Mean Absolute Error)
    Average of absolute errors. Simple, interpretable, and robust for many regression problems.

    9. RMSE (Root Mean Squared Error)
    Heavily penalizes large errors. Best when you care about avoiding big misses.

    Strong ML systems are built by measuring the right things. These metrics show you how your model behaves, where it fails, and whether it is ready for production. What else would you add?

    #AI #ML
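    For a self-contained sketch of how these nine metrics are computed in practice, the snippet below uses scikit-learn on toy data; the labels, predicted probabilities, and regression values are made up purely for illustration.

    ```python
    # Compute the nine metrics from the post on small toy data with scikit-learn.
    import numpy as np
    from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                                 f1_score, roc_auc_score, confusion_matrix,
                                 log_loss, mean_absolute_error,
                                 mean_squared_error)

    # --- Classification metrics (1-7) ---
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.8, 0.6])  # predicted P(class=1)
    y_pred = (y_prob >= 0.5).astype(int)                          # threshold at 0.5

    print("accuracy :", accuracy_score(y_true, y_pred))
    print("precision:", precision_score(y_true, y_pred))
    print("recall   :", recall_score(y_true, y_pred))
    print("f1       :", f1_score(y_true, y_pred))
    print("roc_auc  :", roc_auc_score(y_true, y_prob))   # uses scores, not hard labels
    print("confusion:\n", confusion_matrix(y_true, y_pred))
    print("log_loss :", log_loss(y_true, y_prob))         # penalizes confident misses

    # --- Regression metrics (8-9) ---
    y_reg_true = np.array([3.0, 5.5, 2.1, 7.8])
    y_reg_pred = np.array([2.8, 6.0, 2.5, 6.9])
    print("mae :", mean_absolute_error(y_reg_true, y_reg_pred))
    print("rmse:", np.sqrt(mean_squared_error(y_reg_true, y_reg_pred)))
    ```

    Note that ROC AUC and log loss are computed from predicted probabilities, while accuracy, precision, recall, F1, and the confusion matrix depend on the chosen decision threshold.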
