The host organization, in conjunction with its vendor partners, sponsors hundreds of events each year, ranging from webcasts and tradeshows to executive roundtables and technology forums.
In this webinar, we demonstrated a scalable, repeatable methodology for leveraging Large Language Models (LLMs) to extract intelligence from social media during civil unrest and politically sensitive events. Through a live demo, we walked through our OSINT workflow—from planning and collection to knowledge extraction and dissemination—showing how it could be applied across defense, public-health and enterprise missions to detect narrative manipulation early, improve response times and ensure ethical AI deployment.
During this session, attendees learned:
Best practices for ensuring operational security (OPSEC), human-in-the-loop validation, and ethical AI deployment
The evolution of bot behavior and disinformation—from early spam bots to coordinated, AI-driven emotional manipulation
How to integrate LLM-powered prompts into social media intelligence workflows for faster, more accurate attribution and detection (see the sketch after this list)
Real-world lessons from the LAPD “No Kings” protest analysis, including early-alert capabilities, bot-network detection, and multi-source intelligence fusion
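To make the prompt-integration point above concrete, here is a minimal sketch of how an LLM prompt might slot into the knowledge-extraction stage of a social media intelligence pipeline, with the output held for human-in-the-loop review rather than acted on automatically. This is not the presenters' tooling: the Post structure, the prompt wording, and the generic llm_complete callable are illustrative assumptions standing in for whatever collection API and model client a team already uses.

```python
# Minimal sketch (illustrative only, not the webinar's actual workflow):
# wiring an LLM prompt into one knowledge-extraction pass over collected posts.
# `llm_complete` is a hypothetical stand-in for any model client.
import json
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    timestamp: str  # ISO-8601

PROMPT_TEMPLATE = """You are assisting a social media intelligence analyst.
For the posts below, identify: (1) recurring narratives, (2) indicators of
coordinated or bot-like amplification (e.g., near-duplicate wording, burst
timing), and (3) an overall confidence level. Respond as JSON with keys
"narratives", "coordination_indicators", "confidence".

Posts:
{posts}
"""

def build_prompt(posts: List[Post]) -> str:
    # Flatten collected posts into a plain-text block for the model.
    lines = [f"- [{p.timestamp}] @{p.author}: {p.text}" for p in posts]
    return PROMPT_TEMPLATE.format(posts="\n".join(lines))

def analyze_batch(posts: List[Post], llm_complete: Callable[[str], str]) -> dict:
    """Run one extraction pass; results still require analyst review."""
    raw = llm_complete(build_prompt(posts))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Keep the raw text so a human can triage malformed model output.
        return {"narratives": [], "coordination_indicators": [],
                "confidence": "unparsed", "raw": raw}

if __name__ == "__main__":
    # Toy data and a stubbed model response, purely to show the data flow.
    sample = [
        Post("acct_0101", "Everyone must see what happened downtown tonight!!",
             "2025-06-14T21:02:00Z"),
        Post("acct_0102", "Everyone must see what happened downtown tonight!!",
             "2025-06-14T21:02:04Z"),
    ]
    fake_llm = lambda prompt: json.dumps({
        "narratives": ["downtown incident"],
        "coordination_indicators": ["near-duplicate wording", "4-second posting gap"],
        "confidence": "medium",
    })
    print(analyze_batch(sample, fake_llm))
```

In practice the stubbed fake_llm would be replaced by a call to whichever model endpoint the team uses, and the parsed output would feed an analyst review queue rather than an automated response.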
The Enterprise Firewall Comparative Security Map identifies the total cost per protected Mbps and the protection delivered by the tested products. The report highlights where each firewall performs best and offers recommendations on how to configure and test these security systems.