Generative AI has many benefits for marketers. But with the rapid pace of adoption, often with little to no oversight, issues are quickly arising.
from Marketing AI Institute | Blog https://ift.tt/2VwbAQ9
via IFTTT
Microsoft and Apple just announced stunning advancements in voice AI that are going to change how marketers and businesses work.
from Marketing AI Institute | Blog https://ift.tt/ldZmIFS
via IFTTT
Microsoft just made a huge play into AI for search that is going to have major consequences for marketers and business leaders.
from Marketing AI Institute | Blog https://ift.tt/JC2peXh
via IFTTT
It’s been another exciting week in the world of artificial intelligence. What does it all mean for marketers? The guys break it down on The Marketing AI Show.
from Marketing AI Institute | Blog https://ift.tt/U68pe0Q
via IFTTT
OpenAI researchers collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns, and introduces a framework for analyzing potential mitigations. Read the full report here.
As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science. But, as with any new technology, it is worth considering how they can be misused. Against the backdrop of recurring online influence operations—covert or deceptive efforts to influence the opinions of a target audience—the paper asks:
How might language models change influence operations, and what steps can be taken to mitigate this threat?
Our work brought together different backgrounds and expertise—researchers with grounding in the tactics, techniques, and procedures of online disinformation campaigns, as well as machine learning experts in the generative artificial intelligence field—to base our analysis on trends in both domains.
We believe that it is critical to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale. We hope our research will inform policymakers who are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.
When researchers evaluate influence operations, they consider the actors, behaviors, and content. The widespread availability of technology powered by language models has the potential to impact all three facets:
Actors: Language models could drive down the cost of running influence operations, placing them within reach of new actors and actor types. Likewise, propagandists-for-hire who automate the production of text may gain new competitive advantages.
Behavior: Influence operations with language models will become easier to scale, and tactics that are currently expensive (e.g., generating personalized content) may become cheaper. Language models may also enable new tactics to emerge—like real-time content generation in chatbots.
Content: Text creation tools powered by language models may generate more impactful or persuasive messaging than propagandists can produce on their own, especially propagandists who lack the requisite linguistic or cultural knowledge of their target. They may also make influence operations less discoverable, since models can repeatedly create new content without operators needing to resort to copy-pasting and other noticeable time-saving behaviors.
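One reason copy-pasting is a "noticeable" behavior is that defenders can flag exact or near-duplicate text reposted across accounts. The sketch below is a hedged illustration of that kind of detection, not a method from the report; all function names and the similarity threshold are illustrative assumptions. It compares posts by word n-gram (shingle) overlap, which catches copy-pasted campaigns but would miss model-generated text that is fresh each time.

```python
def shingles(text: str, k: int = 5) -> set[str]:
    """Split text into overlapping k-word shingles (word n-grams)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_copy_paste(posts: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Return index pairs of posts that are near-duplicates of each other."""
    sets = [shingles(p) for p in posts]
    pairs = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if jaccard(sets[i], sets[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

A language model that rewrites the same talking point in fresh words for every post keeps pairwise similarity low, which is precisely why the report flags generated content as harder to discover with duplicate-based heuristics.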
Our bottom-line judgment is that language models will be useful for propagandists and will likely transform online influence operations. Even if the most advanced models are kept private or controlled through application programming interface (API) access, propagandists will likely gravitate towards open-source alternatives and nation states may invest in the technology themselves.
Many factors impact whether, and the extent to which, language models will be used in influence operations. Our report dives into many of these considerations.
While we expect to see diffusion of the technology as well as improvements in the usability, reliability, and efficiency of language models, many questions about the future remain unanswered. Because these are critical possibilities that can change how language models may impact influence operations, additional research to reduce uncertainty is highly valuable.
To chart a path forward, the report lays out key stages in the language-model-to-influence-operation pipeline. Each of these stages is a point for potential mitigations. To successfully wage an influence operation leveraging a language model, propagandists would require that: (1) a model exists, (2) they can reliably access it, (3) they can disseminate content from the model, and (4) an end user is affected. Many possible mitigation strategies fall along these four steps, as shown below.
| Stage in the pipeline | 1. Model Construction | 2. Model Access | 3. Content Dissemination | 4. Belief Formation |
| --- | --- | --- | --- | --- |
| Illustrative mitigations | AI developers build models that are more fact-sensitive. | AI providers impose stricter usage restrictions on language models. | Platforms and AI providers coordinate to identify AI content. | Institutions engage in media literacy campaigns. |
| | Developers spread radioactive data to make generative models detectable. | AI providers develop new norms around model release. | Platforms require “proof of personhood” to post. | Developers provide consumer-focused AI tools. |
| | Governments impose restrictions on data collection. | AI providers close security vulnerabilities. | Entities that rely on public input take steps to reduce their exposure to misleading AI content. | |
| | Governments impose access controls on AI hardware. | | Digital provenance standards are widely adopted. | |
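One mitigation in the "Content Dissemination" column, platforms and AI providers coordinating to identify AI content, has been explored in the research community via statistical watermarks on generated text. The sketch below is a hedged illustration of the detection side only; the hash-based "green list" scheme and all function names are illustrative assumptions, not methods the report prescribes. The idea: a cooperating generator biases sampling toward a context-dependent "green" half of the vocabulary, so a detector can test whether a text contains significantly more green tokens than the roughly 50% expected by chance.

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign ~half the vocabulary to a 'green list' seeded by the previous token."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that fall on the green list for their context."""
    hits = sum(is_green(tokens[i - 1], tokens[i]) for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)

def z_score(tokens: list[str]) -> float:
    """Standard score of the observed green fraction vs. the 0.5 expected without a watermark."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    return (green_fraction(tokens) - 0.5) * math.sqrt(n) / 0.5
```

A platform would flag text whose z-score exceeds some threshold. Note the trade-off implied by the table: detection of this kind only works if AI providers cooperate by embedding the signal, which is why the report pairs it with norms around model release and provenance standards.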
Just because a mitigation could reduce the threat of AI-enabled influence operations does not mean that it should be put into place. Some mitigations carry their own downside risks. Others may not be feasible. While we do not explicitly endorse or rate mitigations, the paper provides a set of guiding questions for policymakers and others to consider.
We hope this framework will spur ideas for other mitigation strategies, and that the guiding questions will help relevant institutions begin to consider whether various mitigations are worth pursuing.
This report is far from the final word on AI and the future of influence operations. Our aim is to define the present environment and to help set an agenda for future research. We encourage anyone interested in collaborating or discussing relevant projects to connect with us. For more, read the full report here.
Josh A. Goldstein (Georgetown University’s Center for Security and Emerging Technology)
Girish Sastry (OpenAI)
Micah Musser (Georgetown University’s Center for Security and Emerging Technology)
Renée DiResta (Stanford Internet Observatory)
Matthew Gentzel (Longview Philanthropy) (work done at OpenAI)
Katerina Sedova (US Department of State) (work done at Center for Security and Emerging Technology prior to government service)
You’ve seen amazing generative AI tools like ChatGPT and Jasper. Now, you can use AI to produce text outputs with actual fact-checking behind them.
from Marketing AI Institute | Blog https://ift.tt/opbUxD2
via IFTTT
At Marketing AI Institute, we are seeing strong quantitative and qualitative evidence that indicates we are at an AI inflection point where the technology begins to go fully mainstream in 2023—with profound effects on…well…everything.
from Marketing AI Institute | Blog https://ift.tt/qKoFmzg
via IFTTT
ChatGPT is just the tip of the AI iceberg for marketers and business leaders. It’s the shiny object that’s captured everyone’s attention, and rightly so.
But, the real story of AI, and its impact on your career and business, lies beneath the surface and has been building at an accelerating rate for the last decade.
from Marketing AI Institute | Blog https://ift.tt/uvKm4S8
via IFTTT
Want to create professional videos 10X faster—without film crews or production costs? Artificial intelligence can help.
from Marketing AI Institute | Blog https://ift.tt/lck82QH
via IFTTT
Artificial intelligence is going to disrupt a lot of marketing agencies in 2023.
from Marketing AI Institute | Blog https://ift.tt/FUNB470
via IFTTT