In this webinar, we demonstrated a scalable, repeatable methodology for leveraging Large Language Models (LLMs) to extract intelligence from social media during civil unrest and politically sensitive events. In a live demo, we walked through our OSINT workflow, from planning and collection to knowledge extraction and dissemination, and showed how it can be applied across defense, public health, and enterprise missions to detect narrative manipulation early, improve response times, and ensure ethical AI deployment.
During this session, attendees learned:
Best practices for ensuring operational security (OPSEC), human-in-the-loop validation, and ethical AI deployment
The evolution of bot behavior and disinformation—from early spam bots to coordinated, AI-driven emotional manipulation
How to integrate LLM-powered prompts into social media intelligence workflows for faster, more accurate attribution and detection (see the sketch following this list)
Real-world lessons from the LAPD “No Kings” protest analysis, including early-alert capabilities, bot-network detection, and multi-source intelligence fusion
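The third takeaway above mentions integrating LLM-powered prompts into a social media intelligence workflow. The webinar did not publish its prompts or tooling, so the following is only a minimal illustrative sketch under assumptions of our own: the call_llm helper, the Post fields, and the JSON schema in the prompt are hypothetical stand-ins, not the presenters' actual workflow.

```python
import json
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-in for whatever LLM endpoint an analyst has access to
# (hosted API, on-prem model, etc.): takes a prompt string, returns model text.
LLMCaller = Callable[[str], str]

@dataclass
class Post:
    author: str
    timestamp: str  # ISO 8601
    text: str

PROMPT_TEMPLATE = """You are assisting an OSINT analyst reviewing public social media posts
collected around a protest event. For the batch of posts below, identify signals of
coordinated or automated activity (near-duplicate wording, synchronized timing,
emotionally manipulative framing). Respond ONLY with JSON of the form:
{{"suspected_coordination": true, "clusters": [{{"authors": [], "reason": ""}}]}}

Posts:
{posts}
"""

def screen_batch(posts: List[Post], call_llm: LLMCaller) -> dict:
    """Render a batch of posts into the prompt, call the model, and parse its JSON verdict.

    Flagged clusters are leads for a human analyst, not automated attributions.
    """
    rendered = "\n".join(f"- [{p.timestamp}] @{p.author}: {p.text}" for p in posts)
    raw = call_llm(PROMPT_TEMPLATE.format(posts=rendered))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models sometimes wrap JSON in prose; surface the raw text for human review.
        return {"suspected_coordination": None, "raw_response": raw}
```

In keeping with the first takeaway, any output from a sketch like this would still pass through human-in-the-loop validation and OPSEC review before attribution or dissemination.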
Modernizing mission support functions offers an opportunity to improve overall mission outcomes. Adobe Document Cloud is uniquely positioned to help government agencies streamline business processes through digital document workflows. Read this eBook to learn how connecting mission support services with common tools and workflows leads to greater visibility and efficiency across agencies.