The deployment of powerful AI systems has enriched our understanding of safety and misuse far more than would have been possible through research alone. Here, we describe our latest thinking in the hope of helping other AI developers address the safety and misuse of deployed models.
Over the past two years, we’ve learned a lot about how language models can be used and abused—insights we couldn’t have gained without the experience of real-world deployment. In June 2020, we began giving developers and researchers access to the OpenAI API, an interface for accessing and building applications on top of new AI models developed by OpenAI. Deploying GPT-3, Codex, and other models in a way that reduces risks of harm has posed various technical and policy challenges.
Large language models are now capable of performing a very wide range of tasks, often out of the box. Their risk profiles, potential applications, and wider effects on society remain poorly understood. As a result, our deployment approach emphasizes continuous iteration and relies on strategies aimed at maximizing the benefits of deployment while reducing associated risks.
There is no silver bullet for responsible deployment, so we try to learn about and address our models’ limitations, and potential avenues for misuse, at every stage of development and deployment. This approach allows us to learn as much as we can about safety and policy issues at small scale and incorporate those insights prior to launching larger-scale deployments.
While not exhaustive, our investments to date span each stage of development and deployment.[1] Since each stage of intervention has limitations, a holistic approach is necessary.
There are areas where we could have done more and where we still have room for improvement. For example, when we first worked on GPT-3, we viewed it as an internal research artifact rather than a production system and were not as aggressive in filtering out toxic training data as we might have otherwise been. We have invested more in researching and removing such material for subsequent models. We have taken longer to address some instances of misuse in cases where we did not have clear policies on the subject, and have gotten better at iterating on those policies. And we continue to iterate towards a package of safety requirements that is maximally effective in addressing risks, while also being clearly communicated to developers and minimizing excessive friction.
Still, we believe that our approach has enabled us to measure and reduce various types of harms from language model use compared to a more hands-off approach, while at the same time enabling a wide range of scholarly, artistic, and commercial applications of our models.[2]
OpenAI has been active in researching the risks of AI misuse since our early work on the malicious use of AI in 2018 and on GPT-2 in 2019, and we have paid particular attention to AI systems empowering influence operations. We have worked with external experts to develop proofs of concept and promoted careful analysis of such risks by third parties. We remain committed to addressing risks associated with language model-enabled influence operations and recently co-organized a workshop on the subject.[3]
Yet we have detected and stopped hundreds of actors attempting to misuse GPT-3 for a much wider range of purposes than producing disinformation for influence operations, including in ways that we either didn’t anticipate or which we anticipated but didn’t expect to be so prevalent.[4] Our use case guidelines, content guidelines, and internal detection and response infrastructure were initially oriented towards risks that we anticipated based on internal and external research, such as generation of misleading political content with GPT-3 or generation of malware with Codex. Our detection and response efforts have evolved over time in response to real cases of misuse encountered “in the wild” that didn’t feature as prominently as influence operations in our initial risk assessments. Examples include spam promotions for dubious medical products and roleplaying of racist fantasies.
To support the study of language model misuse and its mitigation, we are actively exploring opportunities to share statistics on safety incidents this year, in order to concretize discussions about language model misuse.
Many aspects of language models’ risks and impacts remain hard to measure and therefore hard to monitor, minimize, and disclose in an accountable way. We have made active use of existing academic benchmarks for language model evaluation and are eager to continue building on external work, but we have also found that existing benchmark datasets are often not reflective of the safety and misuse risks we see in practice.[5]
Such limitations reflect the fact that academic datasets are seldom created for the explicit purpose of informing production use of language models, and do not benefit from the experience gained from deploying such models at scale. As a result, we've been developing new evaluation datasets and frameworks for measuring the safety of our models, which we plan to release soon. Specifically, we have developed new evaluation metrics for measuring toxicity in model outputs and have also developed in-house classifiers for detecting content that violates our content policy, such as erotic content, hate speech, violence, harassment, and self-harm. Both of these in turn have also been leveraged for improving our pre-training data[6]—specifically, by using the classifiers to filter out content and the evaluation metrics to measure the effects of dataset interventions.
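The pipeline described above—using a classifier to filter pre-training data and an evaluation metric to measure the effect of the intervention—can be sketched in miniature. This is a hypothetical illustration: `toxicity_score` is a toy keyword-based stand-in for an in-house classifier (which in practice would be a trained model), and the threshold and metric are assumptions, not OpenAI's actual implementation.

```python
# Toy stand-in for a learned toxicity classifier: scores a document by
# the fraction of its tokens that appear on a placeholder blocklist.
BLOCKLIST = {"slur1", "slur2"}

def toxicity_score(text: str) -> float:
    """Score a document in [0, 1]; higher means more flagged content."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t in BLOCKLIST for t in tokens) / len(tokens)

def filter_corpus(docs: list[str], threshold: float = 0.1) -> list[str]:
    """Dataset intervention: keep only documents scored below the threshold."""
    return [d for d in docs if toxicity_score(d) < threshold]

def mean_toxicity(docs: list[str]) -> float:
    """Evaluation metric: average classifier score across a corpus."""
    return sum(toxicity_score(d) for d in docs) / len(docs) if docs else 0.0

corpus = ["a perfectly benign document", "slur1 slur1 and other bad text"]
filtered = filter_corpus(corpus)
# The metric quantifies the effect of the intervention on the dataset.
assert mean_toxicity(filtered) <= mean_toxicity(corpus)
```

The same division of labor appears at both ends of the pipeline: the classifier performs the filtering, while the aggregate metric makes the intervention's effect measurable and comparable across dataset versions.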
Reliably classifying individual model outputs along various dimensions is difficult, and measuring their social impact at the scale of the OpenAI API is even harder. We have conducted several internal studies in order to build an institutional muscle for such measurement, but these have often raised more questions than answers.
We are particularly interested in better understanding the economic impact of our models and the distribution of those impacts. We have good reason to believe that the labor market impacts from the deployment of current models may be significant in absolute terms already, and that they will grow as the capabilities and reach of our models grow. We have learned of a variety of local effects to date, including massive productivity improvements on existing tasks performed by individuals, such as copywriting and summarization (sometimes contributing to job displacement and creation), as well as cases where the API unlocked new applications that were previously infeasible, such as synthesis of large-scale qualitative feedback. But we lack a good understanding of the net effects.
We believe that it is important for those developing and deploying powerful AI technologies to address both the positive and negative effects of their work head-on. We discuss some steps in that direction in the concluding section of this post.
In our Charter, published in 2018, we say that we “are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.” We then published a detailed analysis of competitive AI development, and we have closely followed subsequent research. At the same time, deploying AI systems via the OpenAI API has also deepened our understanding of the synergies between safety and utility.
For example, developers overwhelmingly prefer our InstructGPT models—which are fine-tuned to follow user intentions[7]—over the base GPT-3 models. Notably, however, the InstructGPT models were not originally motivated by commercial considerations, but rather were aimed at making progress on long-term alignment problems. In practical terms, this means that customers, perhaps not surprisingly, much prefer models that stay on task and understand the user's intent, and models that are less likely to produce outputs that are harmful or incorrect.[8] Other fundamental research, such as our work on leveraging information retrieved from the Internet in order to answer questions more truthfully, also has potential to improve the commercial utility of AI systems.[9]
These synergies will not always occur. For example, more powerful systems will often take more time to evaluate and align effectively, foreclosing immediate opportunities for profit. And a user's utility and that of society may not be aligned due to negative externalities—consider fully automated copywriting, which can be beneficial for content creators but bad for the information ecosystem as a whole.
It is encouraging to see cases of strong synergy between safety and utility, but we are committed to investing in safety and policy research even when they trade off with commercial utility.
Each of the lessons above raises new questions of its own. What kinds of safety incidents might we still be failing to detect and anticipate? How can we better measure risks and impacts? How can we continue to improve both the safety and utility of our models, and navigate tradeoffs between these two when they do arise?
We are actively discussing many of these issues with other companies deploying language models. But we also know that no organization or set of organizations has all the answers, and we would like to highlight several ways that readers can get more involved in understanding and shaping our deployment of state-of-the-art AI systems.
First, gaining first-hand experience interacting with state-of-the-art AI systems is invaluable for understanding their capabilities and implications. We recently ended the API waitlist after building more confidence in our ability to effectively detect and respond to misuse. Individuals in supported countries and territories can quickly get access to the OpenAI API by signing up here.
Second, researchers working on topics of particular interest to us such as bias and misuse, and who would benefit from financial support, can apply for subsidized API credits using this form. External research is vital for informing both our understanding of these multifaceted systems, as well as wider public understanding.
Finally, today we are publishing a research agenda exploring the labor market impacts associated with our Codex family of models, and a call for external collaborators to carry out this research. We are excited to work with independent researchers to study the effects of our technologies in order to inform appropriate policy interventions, and to eventually expand our thinking from code generation to other modalities.
If you’re interested in working to responsibly deploy cutting-edge AI technologies, apply to work at OpenAI!
source https://openai.com/blog/language-model-safety-and-misuse/
Core to our mission of ensuring that artificial general intelligence benefits all of humanity is understanding the economic impacts that our models have on individuals and society as a whole. Developing tools to rigorously measure the economic impacts of our models is essential to making smarter development and deployment decisions and critical to informing public policy options that maximize human prosperity and minimize the risk of economic harms from AI. Our ability to generate high quality evidence to inform these decisions will be greatly enhanced by developing a range of productive research partnerships, and we firmly believe that AI developers need to support external researchers undertaking this work, rather than exclusively conducting research in-house.
With this in mind, we have published our first public research agenda on these topics, which describes our preliminary priorities for research on the economic impacts of code generation models. Today, we are excited to complement this research agenda with concrete action to facilitate improved measurement of the economic impacts of our models. We are launching a call for expressions of interest from researchers interested in evaluating the economic impact of Codex—our AI system that translates natural language to code. If you are a PhD-level researcher (including current doctoral students) interested in collaborating on this research, we would encourage you to fill out the expression of interest form.
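To make "translating natural language to code" concrete, the pair below illustrates the kind of task a code generation model like Codex automates: given a natural-language description (here, the docstring), produce a working implementation. The function is a hand-written illustrative example of such an input/output pair, not actual Codex output.

```python
def count_word_frequencies(text: str) -> dict:
    """Given a string, return a dict mapping each lowercase word to the
    number of times it appears in the string."""
    # A model-generated solution would look much like this hand-written one:
    counts: dict = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Example usage:
# count_word_frequencies("The cat saw the hat")
# → {"the": 2, "cat": 1, "saw": 1, "hat": 1}
```

Tasks of this shape—routine, well-specified functions—are exactly where productivity effects on programming work would show up first, which is what makes Codex a tractable setting for the economic research described below.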
As an AI research and deployment company, OpenAI recognizes that our decisions around AI system design and deployment can influence economic impacts and the distribution of economic benefits from advances in AI. Despite remarkable technological progress over the past several decades, gains in economic prosperity have not been widely distributed. In the US, trends in both income and wealth inequality over the last forty years demonstrate a worrying pace of economic divergence and uneven access to opportunity. While recent evidence suggests that there is little immediate risk of widespread technological unemployment due to AI, it is clear that the labor market impacts of increasingly advanced AI will vary widely across different types of workers. Unemployment shocks, even if transitory, have been shown to have widespread negative effects on individual wellbeing, and increasing economic inequality may amplify societal cleavages.
We are eager to support and conduct research that has the potential to inform decision-making on three axes.
While we don’t anticipate that the current capabilities of Codex could threaten large-scale economic disruption, future capabilities of code generation and other large language model applications could. We need to engage in research about the economic impact of our models today in order to be positioned to assess the safety of developing and releasing more capable systems in the future. Codex provides a tractable opportunity to establish the foundation for this research going forward.
As an external research collaborator, you would be connected (via OpenAI) to firms that are currently using Codex models or that plan to in the future. You would have the opportunity to work with OpenAI and these firms to implement research projects focused on empirically measuring the impact of Codex on outcomes like worker and firm productivity, labor demand, and skill development. Where necessary and when possible, OpenAI would help facilitate data access to enable impactful research and would provide academic access to Codex and future models. OpenAI will also provide research management resources to external researchers, and researchers would have the freedom to publish their results independently or as co-authors with collaborators at OpenAI. Finally, we intend to facilitate discussions between external researchers, AI developers, AI-adopting firms, and workers in various industries that have been affected by advances in AI in an effort to widen the range of perspectives that can shape the path of AI development and deployment.
If you would like to submit an expression of interest to be a Research Collaborator, please use this form.
We are currently seeking submissions from PhD-level researchers, including current doctoral students. When evaluating expressions of interest, we will assess your background and experience, clarity of motivation to collaborate with OpenAI, and both the clarity and decision-relevance of your research interests related to the economic impact of Codex.
If you are a company or user of Codex models and want to learn how you can contribute to this work moving forward, please fill out this form.
We are in the process of connecting researchers with firms that are best equipped to support particular research interests. If you’re interested in learning more about how your organization can support or sponsor research on economic impacts of AI systems, please contact us here.
If you have any questions about the submission forms or the call for expressions of interest, please contact us at econ@openai.com.