How to scrape quiz questions from a website?
I'm scraping course content from a website, but I can't get clean results; there is too much noisy markup. (I tried inspecting with F12 in Chrome DevTools, but I'm confused.) How can I do this simply?
My code:
import requests, bs4

res = requests.get('https://brilliant.org/practice/computational-models-of-the-neuron/?p=2')
res.raise_for_status()  # check that the request succeeded

bs = bs4.BeautifulSoup(res.text, 'html.parser')
bs.select('.course-quiz-content')  # bs.select('p') didn't work well either
Edit: I only want the text. The result I get is:
[<div class="course-quiz-content">
<div class="solv-problem">
<div class="solv-content">
<div class="question-text latex">
<p><span class="image-caption center">
<img alt="" src="https://ds055uzetaobb.cloudfront.net/brioche/uploads/QjYrKg7An9-group-17.svg?height=200" srcset="https://ds055uzetaobb.cloudfront.net/brioche/uploads/QjYrKg7An9-group-17.svg?height=200 1x,https://ds055uzetaobb.cloudfront.net/brioche/uploads/QjYrKg7An9-group-17.svg?height=400 2x,https://ds055uzetaobb.cloudfront.net/brioche/uploads/QjYrKg7An9-group-17.svg?height=600 3x" style="max-height:200px;max-width:100%;"/>
</span></p>
<p>A neuron has many inputs but only one output, so it must "integrate" its inputs into one output (a single number). Recall that the inputs to a neuron are generally outputs from other neurons. What is the most natural way to represent the set of these inputs to a single neuron in an ANN?</p>
</div>...
Expected result:
A neuron has many inputs but only one output, so it must "integrate" its inputs into one output (a single number). Recall that the inputs to a neuron are generally outputs from other neurons. What is the most natural way to represent the set of these inputs to a single neuron in an ANN?
To get the text of each item in the result set, just call get_text(strip=True) while iterating over it. The following list comprehension gives you a list of text strings:
[t.get_text(strip=True) for t in bs.select('.course-quiz-content')]
Example:
import requests, bs4

res = requests.get('https://brilliant.org/practice/computational-models-of-the-neuron/?p=2')
res.raise_for_status()
bs = bs4.BeautifulSoup(res.text, 'html.parser')
data = [t.get_text(strip=True) for t in bs.select('.course-quiz-content')]
print(data)
Output:
['A neuron has many inputs but only one output, so it must "integrate" its inputs into one output (a single number). Recall that the inputs to a neuron are generally outputs from other neurons. What is the most natural way to represent the set of these inputs to a single neuron in an ANN?',
'In our computational model of a neuron, the inputs defined by the vectorx⃗\vec{x}xare “integrated” by taking thebiasbbbplus the dot product of theinputsx⃗\vec{x}xandweightsw⃗:\vec{w}:w:w⃗⋅x⃗+b.\vec{w} \cdot \vec{x} + b.w⋅x+b.The dot product represents a "weighted sum" because it multiplies each input by a weight.A biological interpretation is that the inputs definingx⃗\vec{x}xare the outputs of other neurons, the weights definingw⃗\vec{w}ware the strengths of the connections to those neurons, and the biasbbbimpacts the threshold the computing neuron must surpass in order to fire.',
'Given the inputs, weights, and bias shown above, what is the integration of these inputs given by the weighted sumw⃗⋅x⃗+b?\vec{w} \cdot \vec{x} + b?w⋅x+b?Note:If you are unfamiliar with dot products, our wiki on thedot product in Cartesian coordinatesmight be helpful.',
'An activation function,H(v),H(v),H(v),is used to transform the integration (weighted sum) into a single output which determines whether or not the neuron would fire. For example, we might haveH(v)H(v)H(v)as the Heaviside step function, that is,H(v)={1ifv≥00ifv<0.H(v) = \begin{cases}\n1 & \mbox{if } v \ge 0 \\\n0 & \mbox{if } v \lt 0. \\\n\end{cases}H(v)={10\u200bifv≥0ifv<0.\u200bConsideringH(w⃗⋅x⃗+b),H(\vec{w} \cdot \vec{x} + b),H(w⋅x+b),how doesincreasingthe biasbbbaffect the likelihood of the neuron firing (all else equal), assuming that a111corresponds to firing?',
'WhenH(v)H(v)H(v)is the Heaviside step function, the neuron modeled byH(w⃗⋅x⃗+b)H(\vec{w} \cdot \vec{x} + b)H(w⋅x+b)fires whenw⃗⋅x⃗+b≥0.\vec{w} \cdot \vec{x} + b\ge 0.w⋅x+b≥0.The hypersurfacew⃗⋅x⃗+b=0\vec{w} \cdot \vec{x} + b = 0w⋅x+b=0is called thedecision boundary, since it divides the input vector space into two parts based on whether the input would cause the neuron to fire. This model is known as a linear classifier because this boundary is based on a linear combination of the inputs.',
'The model above shows a decision boundary for predicting college admission based on the inputx⃗=(SAT\xa0scoreGPA)\vec{x} = \begin{pmatrix}\text{SAT score} \\ \text{GPA} \end{pmatrix}x=(SAT\xa0scoreGPA\u200b)and the activation functionH(w⃗⋅x⃗+b)H(\vec{w} \cdot \vec{x} + b)H(w⋅x+b), whereH(v)H(v)H(v)is the Heaviside step function. Which of the following is a possible value for the weight vector,w⃗?\vec{w}?w?',
"So far, we’ve considered an activation functionH(v)H(v)H(v)with binary outputs, as inspired by a physical neuron. However, in ANNs, we don’t need to restrict ourselves to a binary function. Functions like the ones below avoid counterintuitive jumps and can model continuous values (e.g. a probability):The power of ANNs is illustrated by theuniversal approximation theorem, which states that ANNs using activation functions like these can modelanycontinuous function, given some general requirements about the size and layout of the ANN.We can't prove the universal approximation theorem here, but its implications are still important. No matter how complicated a situation is, a sufficiently large ANN with the appropriate parameters can model it.",
"Consider the activation functionH(v)=11+e−vH(v) = \dfrac{1}{1+e^{-v}}H(v)=1+e−v1\u200b, whereeeestands in for Euler's Number,2.71828…2.71828\ldots2.71828…H(v)H(v)H(v)is known as the sigmoid function. In our image above, we multiply our inputs by their corresponding weights and add a bias of222to getvvv. Then the value invvvis fed into the activation function to get the output of the neuron.Given the inputs, weights, and bias shown in the image above (which are the same as in an earlier question), what is the approximate output (to the nearest thousandth) from this neuron after the integrated value of the inputs is evaluated by the activation function?",
'We’ve now built up a basic computational model of neurons. While one neuron might not seem powerful, connecting many together in a clever manner can yield a highly effective learning model. This turns out to be true for ANNs, as evidenced by the universal approximation theorem.The remainder of this course focuses on the methods used to construct and train ANNs, highlighting the intuition behind the models and their applications.Let’s dive in!']
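Note that the extracted text above still repeats each formula several times (e.g. "x⃗\vec{x}x"), because the page embeds both the rendered math and its raw TeX source, and get_text() concatenates all of them. One way to reduce that noise is to remove the math elements with decompose() before extracting text. This is a minimal sketch on inline sample HTML; the class names used here ("MathJax", "math/tex") are assumptions about how the math widgets are marked up, so inspect the live page and adjust the selectors:

```python
import bs4

# Sample HTML mimicking the duplicated-math noise seen in the output above.
html = '''
<div class="course-quiz-content">
  <p>the inputs defined by the vector
     <span class="MathJax">x⃗</span>
     <script type="math/tex">\\vec{x}</script>
     are integrated</p>
</div>
'''

soup = bs4.BeautifulSoup(html, 'html.parser')

# Drop the rendered-math widgets and the raw TeX sources so each
# formula is not repeated in the extracted text. The selectors are
# hypothetical -- confirm them against the real page in DevTools.
for tag in soup.select('span.MathJax, script[type="math/tex"]'):
    tag.decompose()

data = [t.get_text(' ', strip=True) for t in soup.select('.course-quiz-content')]
print(data)
```

Passing a separator to get_text(' ', strip=True) also keeps adjacent text fragments from running together the way they do in the output above.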