We built a neural theorem prover for Lean that learned to solve a variety of challenging high-school olympiad problems, including problems from the AMC12 and AIME competitions, as well as two problems adapted from the IMO.[1] The prover uses a language model to find proofs of formal statements. Each time we find a new proof, we use it as new training data, which improves the neural network and enables it to iteratively find solutions to harder and harder statements.
We achieved a new state-of-the-art (41.2% vs 29.3%) on the miniF2F benchmark, a challenging collection of high-school olympiad problems. Our approach, which we call statement curriculum learning, consists of manually collecting a set of statements of varying difficulty levels (without proof) where the hardest statements are similar to the benchmark we target. Initially our neural prover is weak and can only prove a few of them. We iteratively search for new proofs and re-train our neural network on the newly discovered proofs, and after 8 iterations, our prover ends up being vastly superior when tested on miniF2F.
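The iterative loop described above can be sketched in pseudocode. The function names (`search_for_proof`, `train`) and the toy interface below are illustrative stand-ins, not our actual implementation:

```python
# Hypothetical sketch of statement curriculum learning (expert iteration).
# `train` and `search_for_proof` stand in for re-training the language
# model and running proof search with it, respectively.

def curriculum_learning(model, statements, train, search_for_proof, iterations=8):
    """Alternate proof search and re-training on newly discovered proofs."""
    solved = {}  # statement -> proof found so far
    for _ in range(iterations):
        # Attempt every still-unsolved statement with the current model.
        for stmt in statements:
            if stmt in solved:
                continue
            proof = search_for_proof(model, stmt)
            if proof is not None:
                solved[stmt] = proof
        # Re-train on all proofs found so far; the stronger model can
        # then close harder statements in the next iteration.
        model = train(model, list(solved.items()))
    return model, solved
```

Each iteration only needs to close the statements just beyond the model's current ability; the re-trained model then reaches the next difficulty level, which is why a wide spread of auxiliary statement difficulties matters.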
Formal mathematics is an exciting domain to study because of (i) its richness, letting you prove arbitrary theorems that require reasoning, creativity and insight, and (ii) its similarity to games—where AI has been spectacularly successful—in that it has an automated way of determining whether a proof is successful (i.e., verified by the formal system). As demonstrated in the trivial example below, proving a formal statement requires generating a sequence of proof steps, each proof step consisting of a call to a tactic.[2] These tactics take mathematical terms as arguments, and each tactic call transforms the current statement to prove into statements that are easier to prove, until nothing is left to prove.
Prove that if $|x - 2| = p$, where $x < 2$, then $x - p = 2 - 2p$.
theorem amc12_2000_p5 -- ← theorem name
  (x p : ℝ) -- ← the statement we want
  (h₀ : x < 2) -- to prove
  (h₁ : abs (x - 2) = p) :
  x - p = 2 - 2 * p :=
begin -- ← formal proof starts here
  -- This first tactic requires that the prover invent
  -- the term: `abs (x - 2) = -(x - 2)`.
  have h₂ : abs (x - 2) = -(x - 2), {
    apply abs_of_neg,
    linarith,
  },
  rw h₁ at h₂,
  -- At this stage the remaining goal to prove is:
  -- `x - p = 2 - 2 * p` knowing that `p = -(x - 2)`.
  linarith,
end
We observe that the capability to generate the original mathematical terms required as tactic arguments, which cannot be done without a neural language model, emerges from our training procedure. The proof below is an example: the proof step `use n + 1` (entirely generated by our models) proposes `n + 1` as a solution, with the rest of the formal proof relying on the `ring_exp` tactic to verify that it is indeed valid.
For all integers $n \geq 9$, prove that $((n + 2)! - (n + 1)!) / n!$ is a perfect square.
theorem amc12b_2020_p6
  (n : ℕ)
  (h0 : 9 ≤ n) :
  ∃ x : ℕ, (x:ℝ)^2 =
    (nat.factorial (n + 2) - nat.factorial (n + 1))
    / nat.factorial n :=
begin
  -- The model directly proposes `n + 1` as solution.
  use n + 1,
  field_simp [nat.factorial_ne_zero, pow_succ'],
  ring_exp
end
We also observe that our models and search procedure are capable of producing proofs that chain multiple non-trivial reasoning steps. In the proof below, the model starts by using contraposition, leading to the existential statement `∃ (x : ℝ), f x ≠ a * x + b`. It then generates a witness for it with `use (0 : ℝ)` and finishes the proof by leveraging the `norm_num` tactic.
Let $f(x) = Ax + B$ and $g(x) = Bx + A$, where $A \ne B$. If $f(g(x)) - g(f(x)) = B - A$, prove that $A + B = 0$.
theorem mathd_train_algebra_217
  (a b : ℝ)
  (f g : ℝ → ℝ)
  (h₀ : ∀ x, f x = a * x + b)
  (h₁ : ∀ x, g x = b * x + a)
  (h₂ : a ≠ b)
  (h₃ : ∀ x, f (g x) - g (f x) = b - a) :
  a + b = 0 :=
begin
  revert h₀ h₁ h₂ h₃,
  -- Initial contraposition.
  contrapose!,
  rintro ⟨h₀, ⟨h₁, h₂⟩⟩,
  -- The model proposes `0` as witness for the current
  -- goal, which consists in `∃ (x : ℝ), f x ≠ a * x + b`.
  use (0 : ℝ),
  simp only [sub_eq_iff_eq_add, h₀, mul_zero, zero_add],
  norm_num at h₀,
end
Our models, trained with statement curriculum learning, were able to close a variety of problems from training textbooks as well as the AMC12 and AIME competitions, and two problems adapted from the IMO. We present below three examples of such generated proofs.
Suppose $a$, $b$, $c$ are the sides of a triangle.
Prove that $a^2(b + c − a) + b^2(c + a − b) + c^2(a + b − c) \leq 3abc$.
theorem imo_1964_p2
  (a b c : ℝ)
  (h₀ : 0 < a ∧ 0 < b ∧ 0 < c)
  (h₁ : c < a + b)
  (h₂ : b < a + c)
  (h₃ : a < b + c) :
  a^2 * (b + c - a) + b^2 * (c + a - b) + c^2 * (a + b - c)
    ≤ 3 * a * b * c :=
begin
  -- Arguments to `nlinarith` are fully invented by our model.
  nlinarith [sq_nonneg (b - a),
             sq_nonneg (c - b),
             sq_nonneg (c - a)]
end
Prove that $a_2 + a_4 + a_6 + a_8 + \cdots + a_{98} = 93$ if $a_1$, $a_2$, $a_3, \ldots$ is an arithmetic progression with common difference $1$, and $a_1 + a_2 + a_3 + \cdots + a_{98} = 137$.
theorem aime_1984_p1
  (u : ℕ → ℚ)
  (h₀ : ∀ n, u (n + 1) = u n + 1)
  (h₁ : ∑ k in finset.range 98, u k.succ = 137) :
  ∑ k in finset.range 49, u (2 * k.succ) = 93 :=
begin
  rw finset.sum_eq_multiset_sum,
  dsimp [finset.range] at h₁,
  simp [h₀],
  ring,
  norm_num at h₁,
  norm_num,
  apply eq_of_sub_eq_zero,
  { simp only [*, abs_of_pos, add_zero] at *, linarith },
end
Adapted from IMO Longlist 1990 Problem 77[3]
For $a, b, c$ reals, prove that $(a^2 + ab + b^2)(b^2 + bc + c^2)(c^2 + ca + a^2) \geq (ab + bc + ca)^3$.
theorem imo_longlist_1990_p77
  (a b c : ℝ) :
  (a * b + b * c + c * a)^3 ≤
    (a^2 + a * b + b^2) * (b^2 + b * c + c^2) *
    (c^2 + c * a + a^2) :=
begin
  -- The three initial steps use Cauchy–Schwarz to prove
  -- `(a * b + b * c) ^ 2 ≤ (a ^ 2 + b ^ 2) * (b ^ 2 + c ^ 2)`,
  -- which is required for the final call to `nlinarith`.
  let u : euclidean_space ℝ (fin 2) := ![a, b],
  let v : euclidean_space ℝ (fin 2) := ![b, c],
  have h₀ := real_inner_mul_inner_self_le u v,
  simp [u, v, fin.sum_univ_succ,
        ←pow_two, ←pow_two, le_of_lt, mul_assoc] at h₀,
  -- The model introduces another required cut (i.e. it invents
  -- the term `0 ≤ (c + a) * (c + a)` and proves it).
  have h₃ : 0 ≤ (c + a) * (c + a),
  { nlinarith, },
  have h₄ := sq_nonneg (a * b + b * c + c * a),
  simp [sq, h₀, h₃, mul_add, add_mul] at h₄ ⊢,
  nlinarith [sq_nonneg (b - a),
             sq_nonneg (c - b),
             sq_nonneg (a - c)]
end
Formal mathematics involves two main challenges that make a naive application of reinforcement learning unlikely to succeed: an infinite action space, since at each step the prover can invoke any tactic with any mathematical term as argument, and the lack of self-play, since there is no opponent to play against and thus no natural source of an unsupervised curriculum.
In our work, we address the infinite action space problem by sampling actions from a language model as we search for a proof. Language models have the capability to generate the tactic calls as well as the original mathematical terms often required as arguments. Our basis for addressing the lack of self-play is the observation that the key role of self-play in 2-player games is to provide an unsupervised curriculum. Our methodology proposes to replace this unsupervised curriculum with an auxiliary set of problem statements (without requiring proofs) of varying difficulty. We empirically show that, when the difficulty of these auxiliary problems is varied enough, our training procedure is able to solve a curriculum of increasingly difficult problems, eventually generalizing to the set of problems we care about.
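As an illustration, sampling actions from a language model during proof search can be sketched as a best-first search over proof states, expanding the lowest-cost state first. The helpers below (`sample_tactics`, `apply_tactic`) are hypothetical stand-ins for the language model and the formal system, not our actual interface:

```python
import heapq

# Hypothetical sketch of best-first proof search with LM-sampled tactics.
# `sample_tactics(goal)` stands in for the language model and yields
# (tactic, negative log-probability) pairs; `apply_tactic(goal, tactic)`
# stands in for the formal system and returns the resulting subgoals,
# or None if the tactic fails.

def best_first_search(goal, sample_tactics, apply_tactic, budget=100):
    """Return a tactic sequence closing all goals, or None on failure."""
    counter = 0  # tie-breaker so the heap never compares goal lists
    frontier = [(0.0, counter, [goal], [])]
    while frontier and budget > 0:
        cost, _, goals, proof = heapq.heappop(frontier)
        if not goals:
            return proof  # nothing left to prove
        budget -= 1
        current, rest = goals[0], goals[1:]
        for tactic, neg_logp in sample_tactics(current):
            subgoals = apply_tactic(current, tactic)
            if subgoals is None:
                continue  # the formal system rejected this tactic call
            counter += 1
            heapq.heappush(frontier,
                           (cost + neg_logp, counter,
                            subgoals + rest, proof + [tactic]))
    return None
```

Prioritizing by cumulative model log-probability means the search spends its budget on the proof attempts the model itself considers most promising.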
While these results are extremely exciting, as they demonstrate that deep learning models are capable of non-trivial mathematical reasoning when interacting with a formal system, we are still very far from best-student performance on these competitions: we only occasionally, rather than consistently, close challenging olympiad problems. We hope nonetheless that our work will motivate research in this domain, in particular towards the IMO Grand Challenge, and that the statement curriculum learning methodology we propose will help accelerate progress in automated reasoning in general.