| column | dtype | range / classes |
|---|---|---|
| url | string | lengths 36 to 386 |
| fetch_time | int64 | 1,368,856,729B to 1,726,893,809B |
| content_mime_type | string | 1 class |
| warc_filename | string | lengths 108 to 138 |
| warc_record_offset | int64 | 4.49M to 1.03B |
| warc_record_length | int64 | 1.31k to 88.5k |
| text | string | lengths 191 to 46k |
| token_count | int64 | 70 to 19.8k |
| char_count | int64 | 191 to 46k |
| metadata | string | lengths 439 to 443 |
| score | float64 | 3.5 to 4.97 |
| int_score | int64 | 4 to 5 |
| crawl | string | 74 classes |
| snapshot_type | string | 2 classes |
| language | string | 1 class |
| language_score | float64 | 0.1 to 1 |
| prefix | string | lengths 90 to 5.28k |
| target | string | lengths 1 to 25.3k |
https://math.stackexchange.com/questions/4591726/fracab4-fracbc4-fraccd4-fracde4-fracea4
| 1,721,677,653,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763517915.15/warc/CC-MAIN-20240722190551-20240722220551-00176.warc.gz
| 318,427,534
| 37,511
|
# $(\frac{a}{b})^4+(\frac{b}{c})^4+(\frac{c}{d})^4+(\frac{d}{e})^4+(\frac{e}{a})^4\ge\frac{b}{a}+\frac{c}{b}+\frac{d}{c}+\frac{e}{d}+\frac{a}{e}$
How exactly do I solve this problem? (Source: 1984 British Math Olympiad #3 part II)
$$\begin{equation*} \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 + \bigl(\frac{e}{a}\bigr)^4 \ge \frac{b}{a} + \frac{c}{b} + \frac{d}{c} + \frac{e}{d} + \frac{a}{e} \end{equation*}$$
There's not really a clear-cut way to use AM-GM on this problem. I've been thinking of maybe using the Power Mean Inequality, but I don't exactly see a way to do that. Maybe we could use harmonic mean for the RHS?
• someone please explain why this is closed. I think I have adequately explained some strategies that I've tried. I believe I've provided enough context. Commented Dec 10, 2022 at 19:54
• I'm kinda new around here, but I was also surprised to see it closed. Also I found the accepted solution to be very nice. Commented Dec 10, 2022 at 21:48
Applying AM-GM, $$LHS - \bigl(\frac{e}{a}\bigr)^4 = \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 \ge4 \cdot \frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{d} \cdot\frac{d}{e} = 4\cdot\frac{a}{e}$$ Do the same thing for the 4 other terms and sum the five inequalities: $$5 LHS - LHS \ge 4 RHS$$ $$\Longleftrightarrow LHS \ge RHS$$ The equality occurs when $$a=b=c=d=e$$
• I think in the end you should have $5LHS-LHS\geq 4RHS$, since you repeat the procedure 5 times, not 4. Then everything works. :) Commented Dec 5, 2022 at 9:33
• @Freshman'sDream You're right, I just corrected this typo. Thanks!
– NN2
Commented Dec 5, 2022 at 9:34
• ohhhhh ok thanks! Commented Dec 9, 2022 at 23:08
NN2 gave a simple and very elegant proof. I tried another way.
What is the minimum of the function $$f(x_1,x_2,x_3,x_4,x_5)=\sum_{i=1}^{5}(x_i^4-x_i^{-1})$$ with domain $$\Bbb{R}^{5+}$$, subject to the constraint equation $$x_1x_2x_3x_4x_5=1$$?
The system of a Lagrange multiplier $$\lambda$$ gives the equations $$4x_i^3+x_i^{-2}=\lambda x_i^{-1}$$ for all $$i=1,2,3,4,5$$. From these equations we have $$4x_ix_j(x_i^4-x_j^4)=x_i-x_j$$ for all $$i,j.$$ I am stuck. Any ideas?
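As a quick sanity check on the inequality itself (not part of either answer above), here is a small Python sketch that tests it on random positive values:

```python
import random

# spot-check the inequality for random positive a, b, c, d, e
for _ in range(10_000):
    a, b, c, d, e = (random.uniform(0.1, 10.0) for _ in range(5))
    lhs = (a/b)**4 + (b/c)**4 + (c/d)**4 + (d/e)**4 + (e/a)**4
    rhs = b/a + c/b + d/c + e/d + a/e
    assert lhs >= rhs - 1e-9   # small tolerance for floating-point error
```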
| 874
| 2,261
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.90625
| 4
|
CC-MAIN-2024-30
|
latest
|
en
| 0.820644
|
# $(\frac{a}{b})^4+(\frac{b}{c})^4+(\frac{c}{d})^4+(\frac{d}{e})^4+(\frac{e}{a})^4\ge\frac{b}{a}+\frac{c}{b}+\frac{d}{c}+\frac{e}{d}+\frac{a}{e}$
How exactly do I solve this problem? (Source: 1984 British Math Olympiad #3 part II)
$$\begin{equation*} \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 + \bigl(\frac{e}{a}\bigr)^4 \ge \frac{b}{a} + \frac{c}{b} + \frac{d}{c} + \frac{e}{d} + \frac{a}{e} \end{equation*}$$
There's not really a clear-cut way to use AM-GM on this problem. I've been thinking of maybe using the Power Mean Inequality, but I don't exactly see a way to do that. Maybe we could use harmonic mean for the RHS? • someone please explain why this is closed. I think I have adequately explained some strategies that I've tried. I believe I've provided enough context. Commented Dec 10, 2022 at 19:54
• I'm kinda new around here, but I was also surprised to see it closed. Also I found the accepted solution to be very nice. Commented Dec 10, 2022 at 21:48
Applying AM-GM, $$LHS - \bigl(\frac{e}{a}\bigr)^4 = \bigl(\frac{a}{b}\bigr)^4 + \bigl(\frac{b}{c}\bigr)^4 + \bigl(\frac{c}{d}\bigr)^4 + \bigl(\frac{d}{e}\bigr)^4 \ge4 \cdot \frac{a}{b} \cdot \frac{b}{c} \cdot \frac{c}{d} \cdot\frac{d}{e} = 4\cdot\frac{a}{e}$$ Do the same thing for the 4 other terms and sum the five inequalities: $$5 LHS - LHS \ge 4 RHS$$ $$\Longleftrightarrow LHS \ge RHS$$ The equality occurs when $$a=b=c=d=e$$
• I think in the end you should have $5LHS-LHS\geq 4RHS$, since you repeat the procedure 5 times, not 4. Then everything works. :) Commented Dec 5, 2022 at 9:33
• @Freshman'sDream You're right, I just corrected this typo. Thanks! – NN2
Commented Dec 5, 2022 at 9:34
• ohhhhh ok thanks! Commented Dec 9, 2022 at 23:08
NN2 gave a simple and very elegant proof. I tried another way. What is the minimum of the function $$f(x_1,x_2,x_3,x_4,x_5)=\sum_{i=1}^{5}(x_i^4-x_i^{-1})$$ with domain $$\Bbb{R}^{5+}$$, subject to the constraint equation $$x_1x_2x_3x_4x_5=1$$? The system of a Lagrange multiplier $$\lambda$$ gives the equations $$4x_i^3+x_i^{-2}=\lambda x_i^{-1}$$ for all $$i=1,2,3,4,5$$.
|
From these equations we have $$4x_ix_j(x_i^4-x_j^4)=x_i-x_j$$ for all $$i,j.$$ I am stuck.
|
https://math.stackexchange.com/questions/767888/math-for-future-value-of-growing-annuity
| 1,597,311,740,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-34/segments/1596439738964.20/warc/CC-MAIN-20200813073451-20200813103451-00073.warc.gz
| 395,301,204
| 34,016
|
Math for Future Value of Growing Annuity
Am I working this out correctly? I need to verify that my code is correct...
$$1000 \cdot \left(\frac{(1 + 0.1 / 12)^{40 * 12} - (1 + 0.06 / 12)^{40 * 12}}{(0.1 / 12) - (0.06 / 12)}\right)$$
Something like this:
53.700663174244 - 10.957453671655 ( = 42.7432095026 )
/
0.0083333333333333 - 0.005 ( = 0.00333333333 )
*
1000
=
12,822,962.8636
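For reference, the same arithmetic in a few lines of Python (it just reproduces the numbers above):

```python
# reproduce the calculation above step by step
growth_i = (1 + 0.10 / 12) ** (40 * 12)   # ≈ 53.700663174244
growth_g = (1 + 0.06 / 12) ** (40 * 12)   # ≈ 10.957453671655
fv = 1000 * (growth_i - growth_g) / (0.10 / 12 - 0.06 / 12)
print(fv)                                  # ≈ 12,822,962.86
```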
ps. could someone please help me with the tag selection * blush*
EDIT: Sorry I know this is a mouthful, but if the math don't add up the code can't add up plus I'm actually a designer... not equal to programmer or mathematician. I'm a creative logician :)
Below is part A which must be added (summed) to part B (original question).
A: $$Future Value (FV) of Lumpsum = PV \cdot (1+i/12)^{b*12}$$
B:
$$FV of Growing Annuity = R1 \cdot \left(\frac{(1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right)$$
• Current savings for retirement (Rands) = PV
• Rate of return = i/100
• Retirement age (years) – Current age (years) = b
• Current monthly contribution towards retirement (Rands) = R1
• 6/100 (Annual Growth rate of annuities) = g
This is all I have to offer except for the more complicated formula to work out the rest of "Savings for Retirement", but if my example B is correct then the B they gave me is wrong and it's driving me nuts because I'm also having trouble with:
C: $$PV of a Growing Annuity = \left(\frac{R2 \cdot(1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right) \cdot \left(1- \left( \frac{(1 + g / 12)^{b * 12}}{(1 + i / 12)^{n * 12}}\right)\right)$$
• Rate of return = i/100
• Retirement age (years) – Current age (years) = b
• 95 (Assumed age of death) - Retirement age (years) = n
• Monthly income need at retirement (Rands) = R2
• 6/100 (Annual Growth rate of annuities) = g
Which then must be: $$C-(A+B)$$ And finally, let me just give it all...
D: $$FV of Growing Annuity = R3 \cdot \left(\frac{((1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12} )}{(i / 12) - (g / 12)}\right)$$
• Answer of C-(A + B) = FV of Growing Annuity
• Rate of return = i/100
• Retirement age (years) – Current age (years) = b
• 6/100 (Annual Growth rate of annuities) = g
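If it helps to see formulas A–D side by side in code, here is a small Python sketch that transcribes them exactly as written above (the input values are made up purely to exercise the functions, and formula C keeps the mixed b*12 / n*12 exponents as given):

```python
def fv_lumpsum(PV, i, b):
    # A: future value of the current savings PV after b years at annual rate i
    return PV * (1 + i / 12) ** (b * 12)

def fv_growing_annuity(R, i, g, b):
    # B (and D): future value of a monthly contribution R growing at annual rate g
    return R * ((1 + i / 12) ** (b * 12) - (1 + g / 12) ** (b * 12)) / (i / 12 - g / 12)

def pv_growing_annuity(R2, i, g, b, n):
    # C: transcribed as written above, including the b*12 / n*12 exponents
    return (R2 * (1 + g / 12) ** (b * 12) / (i / 12 - g / 12)) * (
        1 - (1 + g / 12) ** (b * 12) / (1 + i / 12) ** (n * 12)
    )

# hypothetical inputs, only to exercise the formulas
PV, R1, R2 = 50_000, 1_000, 20_000
i, g, b, n = 0.10, 0.06, 40, 30

A = fv_lumpsum(PV, i, b)
B = fv_growing_annuity(R1, i, g, b)
C = pv_growing_annuity(R2, i, g, b, n)
shortfall = C - (A + B)

# D: the extra monthly contribution R3 whose future value covers the shortfall
R3 = shortfall * (i / 12 - g / 12) / ((1 + i / 12) ** (b * 12) - (1 + g / 12) ** (b * 12))
print(A, B, C, shortfall, R3)
```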
• You should state what problem you are trying to solve. It appears you are starting with a deposit of 1000 that draws some amount of interest for some time, but what the subtractions mean I can't guess. I think the first term is $10\%$ annual interest compounded monthly for 40 years. Then you should write it mathematically-we don't necessarily know what the arguments for Math.Pow are. – Ross Millikan Apr 24 '14 at 21:05
• To elaborate on what @RossMillikan meant, you gave a series of numbers and asked "Is this correct?" without specifying what those numbers mean and the goal of the calculation. For instance, $1000(1+0.1/12)^{40*12}$ gives your total money with an initial investment of \$1000, a rate of 10%, monthly compounding and 40 years of time. Why are you then subtracting the same calculation but with a 6% rate? Why are you dividing by the difference of these rates? We can't know if what you're doing is correct if we don't know what you're trying to do. – RandomUser Apr 24 '14 at 21:27
| 1,030
| 3,194
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.0625
| 4
|
CC-MAIN-2020-34
|
latest
|
en
| 0.778724
|
Math for Future Value of Growing Annuity
Am I working this out correctly? I need to verify that my code is correct...
$$1000 \cdot \left(\frac{(1 + 0.1 / 12)^{40 * 12} - (1 + 0.06 / 12)^{40 * 12}}{(0.1 / 12) - (0.06 / 12)}\right)$$
Something like this:
53.700663174244 - 10.957453671655 ( = 42.7432095026 )
/
0.0083333333333333 - 0.005 ( = 0.00333333333 )
*
1000
=
12,822,962.8636
ps. could someone please help me with the tag selection * blush*
EDIT: Sorry I know this is a mouthful, but if the math don't add up the code can't add up plus I'm actually a designer... not equal to programmer or mathematician. I'm a creative logician :)
Below is part A which must be added (summed) to part B (original question).
|
A: $$Future Value (FV) of Lumpsum = PV \cdot (1+i/12)^{b*12}$$
B:
$$FV of Growing Annuity = R1 \cdot \left(\frac{(1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right)$$
• Current savings for retirement (Rands) = PV
• Rate of return = i/100
• Retirement age (years) – Current age (years) = b
• Current monthly contribution towards retirement (Rands) = R1
• 6/100 (Annual Growth rate of annuities) = g
This is all I have to offer except for the more complicated formula to work out the rest of "Savings for Retirement", but if my example B is correct then the B they gave me is wrong and it's driving me nuts because I'm also having trouble with:
C: $$PV of a Growing Annuity = \left(\frac{R2 \cdot(1 + g / 12)^{b * 12}}{(i / 12) - (g / 12)}\right) \cdot \left(1- \left( \frac{(1 + g / 12)^{b * 12}}{(1 + i / 12)^{n * 12}}\right)\right)$$
• Rate of return = i/100
• Retirement age (years) – Current age (years) = b
• 95 (Assumed age of death) - Retirement age (years) = n
• Monthly income need at retirement (Rands) = R2
• 6/100 (Annual Growth rate of annuities) = g
Which then must be: $$C-(A+B)$$ And finally, let me just give it all...
D: $$FV of Growing Annuity = R3 \cdot \left(\frac{((1 + i / 12)^{b * 12} - (1 + g / 12)^{b * 12} )}{(i / 12) - (g / 12)}\right)$$
• Answer of C-(A + B) = FV of Growing Annuity
• Rate of return = i/100
• Retirement age (years) – Current age (years) = b
• 6/100 (Annual Growth rate of annuities) = g
• You should state what problem you are trying to solve.
|
https://mathematica.stackexchange.com/questions/216293/phase-portrait-for-ode-with-ivp?noredirect=1
| 1,696,284,912,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-40/segments/1695233511021.4/warc/CC-MAIN-20231002200740-20231002230740-00732.warc.gz
| 414,570,837
| 42,266
|
# Phase Portrait for ODE with IVP
I'm trying to make a phase portrait for the ODE x'' + 16x = 0, with initial conditions x[0]=-1 & x'[0]=0. I know how to solve the ODE and find the integration constants; the solution comes out to be x(t) = -cos(4t) and x'(t) = 4sin(4t). But I don't know how to make a phase portrait out of it. I've looked at this link Plotting a Phase Portrait but I couldn't replicate mine based off of it.
Phase portrait for any second order autonomous ODE can be found as follows.
Convert the ODE to state space. This results in 2 first order ODE's. Then call StreamPlot with these 2 equations.
Let the state variables be $$x_1=x,x_2=x'(t)$$, then taking derivatives w.r.t time gives $$x'{_1}=x_2,x'{_2}=x''(t)=-16 x_1$$. Now, using StreamPlot gives
StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -2, 2}]
To see the line that passes through the initial conditions $$x_1(0)=1,x_2(0)=0.1$$, add the option StreamPoints
StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -5, 5},
StreamPoints -> {{{{1, .1}, Red}, Automatic}}]
To verify the above is the correct phase plot, you can do
ClearAll[x, t]
ode = x''[t] + 16 x[t] == 0;
ic = {x[0] == 1, x'[0] == 1/10};
sol = x[t] /. First@(DSolve[{ode, ic}, x[t], t]);
ParametricPlot[Evaluate[{sol, D[sol, t]}], {t, 0, 3}, PlotStyle -> Red]
The advantage of a phase plot is that one does not have to solve the ODE first (so it works for nonlinear, hard-to-solve ODEs).
All you have to do is convert the ODE to state space and use a function like StreamPlot
If you want to automate the part of converting the ODE to state space, you can also use Mathematica for that. Simply use StateSpaceModel and just read off the equations.
eq = x''[t] + 16 x[t] == 0;
ss = StateSpaceModel[{eq}, {{x[t], 0}, {x'[t], 0}}, {}, {x[t]}, t]
The above shows the A matrix in $$x'=Ax$$. So first row reads $$x_1'(t)=x_2$$ and second row reads $$x'_2(t)=-16 x_1$$
The following can be done to automate plotting StreamPlot directly from the state space ss result
A = First@Normal[ss];
vars = {x1, x2}; (*state space variables*)
eqs = A . vars;
StreamPlot[eqs, {x1, -2, 2}, {x2, -5, 5},
StreamPoints -> {{{{1, .1}, Red}, Automatic}}]
• Can your method plot y''[x]+2 y'[x]+3 y[x]==2 x?
– yode
Mar 27, 2022 at 8:59
• @yode Phase portraits are used for homogeneous ODEs, i.e. systems of the form $x'=A x$ and not $x'=A x + u$, since they show the behaviour of the system itself, independent of any forcing functions (the stuff on the RHS). This behaviour is given by the phase portrait diagram. The reason is that it is only the $A$ matrix eigenvalues and eigenvectors that determine this behaviour, and $A$ depends only on the system itself, without any external input being there. Mar 27, 2022 at 14:08
• Can we plot your ss in MMA directly?
– yode
Mar 29, 2022 at 10:59
• @yoda Yes. I've updated the above with what I think you are asking for. Hope this helps. Mar 29, 2022 at 15:23
EquationTrekker works for me, but if you are not interested in looking at a range of solutions, it might be easier to just do it with ParametricPlot
x[t_] := -Cos[4 t]
ParametricPlot[{x[t], x'[t]} // Evaluate, {t, 0, 2 π},
Axes -> False, PlotLabel -> PhaseTrajectory, Frame -> True,
FrameLabel -> {x[t], x'[t]}, GridLines -> Automatic]
• What version is this on, Bill? Someone in the QA that OP links to says EquationTrekker is broken for them on v11.0 Mar 15, 2020 at 6:04
• This plot is from ParametricPlot, not EquationTrekker, but in v12.0 EquationTrekker gives me plots, although I do get PropertyValue errors. Mar 15, 2020 at 7:40
| 1,140
| 3,557
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.65625
| 4
|
CC-MAIN-2023-40
|
longest
|
en
| 0.844324
|
# Phase Portrait for ODE with IVP
I'm trying to make a phase portrait for the ODE x'' + 16x = 0, with initial conditions x[0]=-1 & x'[0]=0. I know how to solve the ODE and find the integration constants; the solution comes out to be x(t) = -cos(4t) and x'(t) = 4sin(4t). But I don't know how to make a phase portrait out of it. I've looked at this link Plotting a Phase Portrait but I couldn't replicate mine based off of it. Phase portrait for any second order autonomous ODE can be found as follows. Convert the ODE to state space. This results in 2 first order ODE's. Then call StreamPlot with these 2 equations. Let the state variables be $$x_1=x,x_2=x'(t)$$, then taking derivatives w.r.t time gives $$x'{_1}=x_2,x'{_2}=x''(t)=-16 x_1$$. Now, using StreamPlot gives
StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -2, 2}]
To see the line that passes through the initial conditions $$x_1(0)=1,x_2(0)=0.1$$, add the option StreamPoints
StreamPlot[{x2, -16 x1}, {x1, -2, 2}, {x2, -5, 5},
StreamPoints -> {{{{1, .1}, Red}, Automatic}}]
To verify the above is the correct phase plot, you can do
ClearAll[x, t]
ode = x''[t] + 16 x[t] == 0;
ic = {x[0] == 1, x'[0] == 1/10};
sol = x[t] /. First@(DSolve[{ode, ic}, x[t], t]);
ParametricPlot[Evaluate[{sol, D[sol, t]}], {t, 0, 3}, PlotStyle -> Red]
The advantage of a phase plot is that one does not have to solve the ODE first (so it works for nonlinear, hard-to-solve ODEs). All you have to do is convert the ODE to state space and use a function like StreamPlot
If you want to automate the part of converting the ODE to state space, you can also use Mathematica for that. Simply use StateSpaceModel and just read off the equations. eq = x''[t] + 16 x[t] == 0;
ss = StateSpaceModel[{eq}, {{x[t], 0}, {x'[t], 0}}, {}, {x[t]}, t]
The above shows the A matrix in $$x'=Ax$$. So first row reads $$x_1'(t)=x_2$$ and second row reads $$x'_2(t)=-16 x_1$$
The following can be done to automate plotting StreamPlot directly from the state space ss result
A = First@Normal[ss];
vars = {x1, x2}; (*state space variables*)
eqs = A . vars;
StreamPlot[eqs, {x1, -2, 2}, {x2, -5, 5},
StreamPoints -> {{{{1, .1}, Red}, Automatic}}]
• Can your method plot y''[x]+2 y'[x]+3 y[x]==2 x? – yode
Mar 27, 2022 at 8:59
• @yode Phase portraits are used for homogeneous ODEs.
|
Systems of the form $x'=A x$ and not $x'=A x + u$.
|
http://math.stackexchange.com/questions/tagged/vector-spaces+vector-analysis
| 1,398,348,055,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00167-ip-10-147-4-33.ec2.internal.warc.gz
| 210,644,211
| 24,883
|
# Tagged Questions
23 views
### Why should we expect the divergence operator to be invariant under transformations?
A lot of the time with vector calculus identities, something that seems magical at first ends up having a nice and unique proof. For the divergence operator, one can prove that it's invariant under a ...
3 views
### Gentle introduction to discrete vector field [closed]
I am looking for a gentle introduction to discrete vector field. Thanks in advance.
26 views
### Vectors and Planes
Let there be 2 planes: $x-y+z=2, 2x-y-z=1$ Find the equation of the line of the intersection of the two planes, as well as that of another plane which goes through that line. Attempt to solve: the ...
25 views
63 views
### Extrema of a vector norm under two inner-product constraints.
If $\langle\vec{A},\vec{V}\rangle=1\; ,\; \langle\vec{B},\vec{V}\rangle=c$, then: \begin{align} max\left \| \vec{V} \right \|_{1}=?\;\;\;min\left \| \vec{V} \right \|_{1}=? \end{align} Consider the ...
129 views
### How to rotate two vectors (2d), where their angle is larger than 180.
The rotation matrix $$\begin{bmatrix} \cos\theta & -\sin \theta\\ \sin\theta & \cos\theta \end{bmatrix}$$ cannot process the case that the angle between two vectors is larger than $180$ ...
53 views
### Is this statement about vectors true?
If vectors $A$ and $B$ are parallel, then, $|A-B| = |A| - |B|$ Is the above statement true?
822 views
### Collinearity of three points of vectors
Show that the three vectors $$A\_ = 2i + j - 3k , B\_ = i - 4k , C\_ = 4i + 3j -k$$ are linearly dependent. Determine a relation between them and hence show that the terminal points are collinear. ...
92 views
135 views
### Vectors transformation
Give a necessary and sufficient condition ("if and only if") for when three vectors $a, b, c, \in \mathbb{R^2}$ can be transformed to unit length vectors by a single affine transformation. This is ...
56 views
### To show the inequality $\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$
Let $A\in$ $\mathbb{C}^{p\times q}$ with column $u_1,\ldots,u_q$ and rows $\vec{v_1},\ldots,\vec{v_p}$. show that $$\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$$ and ...
160 views
### Find the necessary and sufficient conditions on $A$ such that $\|T(\vec{x})\|=|\det A|\cdot\|\vec{x}\|$ for all $\vec{x}$.
Consider the mapping $T:\mathbb{R}^n\mapsto\mathbb{R}^n$ defined by $T(\vec{x})=A\vec{x}$ where $A$ is a $n\times n$ matrix. Find the necessary and sufficient conditions on $A$ such that ...
58 views
### Dot products of three or more vectors
Can't we construct a mapping from $V^3(R^1)$ to $R$ such that $a.b.c = a_{x}b_{x}c_{x}+a_{y}b_{y}c_{y}+a_{z}b_{z}c_{z}$ (a,b,c are vectors in $V^3(R^1)$ ) and more generally $a^n$ , $a.b.c.d.e...$ ...
365 views
50 views
| 900
| 2,846
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.84375
| 4
|
CC-MAIN-2014-15
|
longest
|
en
| 0.833623
|
# Tagged Questions
23 views
### Why should we expect the divergence operator to be invariant under transformations? A lot of the time with vector calculus identities, something that seems magical at first ends up having a nice and unique proof. For the divergence operator, one can prove that it's invariant under a ...
3 views
### Gentle introduction to discrete vector field [closed]
I am looking for a gentle introduction to discrete vector field. Thanks in advance. 26 views
### Vectors and Planes
Let there be 2 planes: $x-y+z=2, 2x-y-z=1$ Find the equation of the line of the intersection of the two planes, as well as that of another plane which goes through that line. Attempt to solve: the ...
25 views
63 views
### Extrema of a vector norm under two inner-product constraints. If $\langle\vec{A},\vec{V}\rangle=1\; ,\; \langle\vec{B},\vec{V}\rangle=c$, then: \begin{align} max\left \| \vec{V} \right \|_{1}=?\;\;\;min\left \| \vec{V} \right \|_{1}=? \end{align} Consider the ...
129 views
### How to rotate two vectors (2d), where their angle is larger than 180. The rotation matrix $$\begin{bmatrix} \cos\theta & -\sin \theta\\ \sin\theta & \cos\theta \end{bmatrix}$$ cannot process the case that the angle between two vectors is larger than $180$ ...
53 views
### Is this statement about vectors true? If vectors $A$ and $B$ are parallel, then, $|A-B| = |A| - |B|$ Is the above statement true? 822 views
### Collinearity of three points of vectors
Show that the three vectors $$A\_ = 2i + j - 3k , B\_ = i - 4k , C\_ = 4i + 3j -k$$ are linearly dependent. Determine a relation between them and hence show that the terminal points are collinear. ...
92 views
135 views
### Vectors transformation
Give a necessary and sufficient condition ("if and only if") for when three vectors $a, b, c, \in \mathbb{R^2}$ can be transformed to unit length vectors by a single affine transformation. This is ...
56 views
### To show the inequality $\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$
Let $A\in$ $\mathbb{C}^{p\times q}$ with column $u_1,\ldots,u_q$ and rows $\vec{v_1},\ldots,\vec{v_p}$. show that $$\|A\|\geq\max\{\|u_1\|,\ldots,\|u_q\|,\|\vec{v_1}\|,\ldots,\|\vec{v_q}\|\}$$ and ...
160 views
### Find the necessary and sufficient conditions on $A$ such that $\|T(\vec{x})\|=|\det A|\cdot\|\vec{x}\|$ for all $\vec{x}$. Consider the mapping $T:\mathbb{R}^n\mapsto\mathbb{R}^n$ defined by $T(\vec{x})=A\vec{x}$ where $A$ is a $n\times n$ matrix.
|
Find the necessary and sufficient conditions on $A$ such that ...
58 views
### Dot products of three or more vectors
Can't we construct a mapping from $V^3(R^1)$ to $R$ such that $a.b.c = a_{x}b_{x}c_{x}+a_{y}b_{y}c_{y}+a_{z}b_{z}c_{z}$ (a,b,c are vectors in $V^3(R^1)$ ) and more generally $a^n$ , $a.b.c.d.e...$ ...
365 views
50 views
|
https://quant.stackexchange.com/questions/68635/characteristics-of-factor-portfolios/68646
| 1,713,245,452,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296817043.36/warc/CC-MAIN-20240416031446-20240416061446-00402.warc.gz
| 439,703,907
| 40,053
|
# characteristics of factor portfolios
In the paper Characteristics of Factor Portfolios (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1601414), when it discusses pure factor portfolios, it says that simple style factor portfolios have zero exposure to all other style, country, and industry factors. Could someone help me understand the math for why the style factor portfolios have zero exposure to all other style, country, and industry factors?
So, for example, if we are interested in the return of a P/E factor and a P/B factor, we would gather the P/E and P/B for all of our stocks into a matrix of loadings $$B$$. $$B$$ would have two columns – one containing P/E and one containing P/B for all assets. We then regress $$R$$ (a vector containing the returns of all assets) on $$B$$. OLS regression gives us $$f= (B’B)^{-1} B’R$$ = the returns of the style factors for this particular period. The rows of $$(B’B)^{-1} B’$$ are considered to be the factor portfolios.
So, let’s go one step further and look at the loadings of the portfolio on the individual styles by multiplying the factor portfolios with the matrix of loadings. This gives $$(B’B)^{-1} B’B = I$$ - an identity matrix. Hence, the loadings of each factor portfolio are 1 against the particular style and 0 against any other style.
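A small numerical illustration of that identity (numpy, with synthetic loadings and returns; not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_assets = 50
B = rng.normal(size=(n_assets, 2))   # loadings: column 0 = P/E, column 1 = P/B
R = rng.normal(size=n_assets)        # one period of asset returns

W = np.linalg.inv(B.T @ B) @ B.T     # rows of W are the factor portfolios
f = W @ R                            # estimated factor returns for the period

print(W @ B)   # ~ identity matrix: unit exposure to own style, zero to the other
```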
| 312
| 1,312
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.59375
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.874208
|
# characteristics of factor portfolios
In the paper Characteristics of Factor Portfolios (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1601414), when it discusses pure factor portfolios, it says that simple style factor portfolios have zero exposure to all other style, country, and industry factors. Could someone help me understand the math for why the style factor portfolios have zero exposure to all other style, country, and industry factors? So, for example, if we are interested in the return of a P/E factor and a P/B factor, we would gather the P/E and P/B for all of our stocks into a matrix of loadings $$B$$. $$B$$ would have two columns – one containing P/E and one containing P/B for all assets. We then regress $$R$$ (a vector containing the returns of all assets) on $$B$$. OLS regression gives us $$f= (B’B)^{-1} B’R$$ = the returns of the style factors for this particular period. The rows of $$(B’B)^{-1} B’$$ are considered to be the factor portfolios. So, let’s go one step further and look at the loadings of the portfolio on the individual styles by multiplying the factor portfolios with the matrix of loadings.
|
This gives $$(B’B)^{-1} B’B = I$$ - an identity matrix.
|
https://math.stackexchange.com/questions/851072/theorem-on-giuga-number/851114
| 1,561,549,511,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00312.warc.gz
| 522,635,721
| 34,745
|
# Theorem on Giuga number
Giuga number : $n$ is a Giuga number $\iff$ For every prime factor $p$ of $n$ , $p | (\frac{n}{p}-1)$
How to prove the following theorem on Giuga numbers
$n$ is a Giuga number $\iff$ $\sum_{i=1}^{n-1} i^{\phi(n)} \equiv -1 \mod {n}$
## 1 Answer
The $\Rightarrow$ part. First, a Giuga number must be squarefree, since, by assuming $p^2\mid n$, we would have that $p$ divides two consecutive numbers, $\frac{n}{p}$ and $\frac{n}{p}-1$, which is clearly impossible. So we have: $$n = \prod_{i=1}^{k} p_i$$ which implies: $$\phi(n) = \prod_{i=1}^{k} (p_i-1).$$ By considering the sum $$\sum_{i=0}^{n-1}i^{\phi(n)}$$ $\pmod{p_i}$ we have that all the terms contribute with a $1$, except the multiples of $p_i$ that contribute with a zero. This gives: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-\frac{n}{p_i}\equiv (n-1)\pmod{p_i}\tag{1}$$ which holds for any $i\in[1,k]$. The Chinese remainder theorem now gives: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-1\pmod{\prod_{i=1}^{k}p_i}$$ which is just: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv -1\pmod{n}$$ as claimed. For the $\Leftarrow$ part, we have that the congruence $\!\!\!\pmod{n}$ implies the congruence $\!\!\!\pmod{p_i}$, hence $(1)$ must hold, so we must have: $$\frac{n}{p_i}\equiv 1\pmod{p_i}$$ which is equivalent to $p_i\mid\left(\frac{n}{p_i}-1\right).$
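A quick numerical check of both characterizations on the smallest Giuga number, $n = 30$ (plain Python, added here only for illustration):

```python
n = 30
primes = [2, 3, 5]                        # prime factors of 30

# Giuga condition: p | (n/p - 1) for every prime factor p
assert all((n // p - 1) % p == 0 for p in primes)

# Euler's totient of n (n is squarefree, so phi = prod(p - 1))
phi = 1
for p in primes:
    phi *= p - 1                          # phi(30) = 8

# the congruence from the theorem: sum_{i=1}^{n-1} i^phi(n) = -1 (mod n)
s = sum(pow(i, phi, n) for i in range(1, n)) % n
assert s == n - 1
print("n = 30 satisfies both characterizations")
```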
| 504
| 1,311
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2019-26
|
latest
|
en
| 0.746232
|
# Theorem on Giuga number
Giuga number : $n$ is a Giuga number $\iff$ For every prime factor $p$ of $n$ , $p | (\frac{n}{p}-1)$
How to prove the following theorem on Giuga numbers
$n$ is a Giuga number $\iff$ $\sum_{i=1}^{n-1} i^{\phi(n)} \equiv -1 \mod {n}$
## 1 Answer
The $\Rightarrow$ part. First, a Giuga number must be squarefree, since, by assuming $p^2\mid n$, we would have that $p$ divides two consecutive numbers, $\frac{n}{p}$ and $\frac{n}{p}-1$, which is clearly impossible. So we have: $$n = \prod_{i=1}^{k} p_i$$ which implies: $$\phi(n) = \prod_{i=1}^{k} (p_i-1).$$ By considering the sum $$\sum_{i=0}^{n-1}i^{\phi(n)}$$ $\pmod{p_i}$ we have that all the terms contribute with a $1$, except the multiples of $p_i$ that contribute with a zero. This gives: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-\frac{n}{p_i}\equiv (n-1)\pmod{p_i}\tag{1}$$ which holds for any $i\in[1,k]$. The Chinese remainder theorem now gives: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv n-1\pmod{\prod_{i=1}^{k}p_i}$$ which is just: $$\sum_{i=0}^{n-1}i^{\phi(n)}\equiv -1\pmod{n}$$ as claimed.
|
For the $\Leftarrow$ part, we have that the congruence $\!\!\!\pmod{n}$ implies the congruence $\!\!\!\pmod{p_i}$, hence $(1)$ must hold, so we must have: $$\frac{n}{p_i}\equiv 1\pmod{p_i}$$ that is equivalent to $p_i\mid\left(\frac{n}{p_i}-1\right).$
|
https://math.meta.stackexchange.com/questions/31929/is-there-any-stack-exchange-site-that-allows-sharing-review-of-interesting-obse/31930
| 1,620,777,485,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00461.warc.gz
| 412,254,493
| 31,820
|
# Is there any Stack Exchange site that allows sharing, review of interesting observations\results made\obtained by students in Mathematics?
I'm a 10th grader who is extremely interested in Mathematics and I frequently come across some interesting (at least, to me) results while doing some Math problem and sometimes I want to get an expert-level opinion on that result.
For example, I was recently thinking about how one would go about defining a function that gives a graph like the one given below :
I successfully defined such a function using a combination of the floor function, ceiling function, fractional part function and the signum function. It was pretty interesting for me.
Another time, I discovered a simple derivation for the quadratic formula and once, a derivation for the compound angle identities in Trigonometry
These are some examples of when I wanted to share these and get some reviews/opinions about the results that I had obtained.
So, basically, is there a website for Mathematics like Code Review for Coding in the Stack Exchange Community?
Thanks!
PS : If you're wondering what the function is, it's given below : $$f(x) = \text{Sign}\Bigg(\Bigg\{\dfrac{\lceil x \rceil}{2} \Bigg \} - a \Bigg) \text{, where } 0 < a < 0.5$$ $$\text{Here, }\{ x \} \text { is the fractional part function which is defined as } \{ x \} = x - \lfloor x \rfloor$$ $$\text{And Sign}(x) \text{ is the signum function, which gives the sign of the input, and } 0 \text{ in case the input is }0$$
Edit : I recently thought of a much simpler version of the function that I talk about above. It is : $$f(x) = \cos(\lfloor x \rfloor \cdot \pi)$$
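A small Python check (not part of the original post) that the simpler $\cos(\lfloor x \rfloor \cdot \pi)$ form agrees with the Sign-based definition away from the integer jump points:

```python
import math

def f_original(x, a=0.25):
    frac = (math.ceil(x) / 2) % 1.0            # fractional part of ceil(x)/2
    return (frac - a > 0) - (frac - a < 0)     # Sign(frac - a)

def f_simple(x):
    return math.cos(math.floor(x) * math.pi)

# the two definitions agree except exactly at integer x
for x in [0.3, 0.7, 1.2, 1.9, 2.5, 3.01, -0.4, -1.6]:
    assert f_original(x) == round(f_simple(x))
```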
• A better and clearer definition would be to not insist that it be given by a single formula and to say that $$f(x)=\begin{cases}1&\mbox{ if }2n<x\leq 2n+1\\-1&\mbox{ otherwise}.\end{cases}$$ – Matt Samuel Jun 16 '20 at 20:47
• An addition though : "Where $n \in \Bbb Z$". Actually, the reason that I insisted on a Mathematical definition of the function was so that it can be graphed using a graphing calculator and embedded in a computer program with a mathematical approach. Thanks for the suggestion! – Rajdeep Sindhu Jun 16 '20 at 20:52
• Is "check my work" question not allowed? We have a tag specified for that. – Arctic Char Jun 16 '20 at 20:55
• I am familiar with the solution-verification tag and in fact, have used it a few times too. As far as I know, this question : math.stackexchange.com/questions/3704308/… was closed till some time ago for the reason : Homework and check my work type questions not allowed. It's re-opened now though. Also, wouldn't a separate site (like Code Review for reviewing programs) be nice? – Rajdeep Sindhu Jun 16 '20 at 21:03
• I looked at the timeline and it doesn't appear that the question was ever closed. The reason is invalid in any case, because both homework and check-my-work questions are allowed. – Matt Samuel Jun 16 '20 at 21:34
• @MattSamuel I'm sorry for the misleading info. Looks like I can't recall the question which was closed for being a 'check my work' type question. Maybe (and most probably), it wasn't even on Mathematics SE. – Rajdeep Sindhu Jun 16 '20 at 21:43
• Then get rid of the first sentence of your post, @RajdeepSindhu ! – amWhy Jun 16 '20 at 23:30
| 859
| 3,289
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.96875
| 4
|
CC-MAIN-2021-21
|
latest
|
en
| 0.924377
|
# Is there any Stack Exchange site that allows sharing, review of interesting observations\results made\obtained by students in Mathematics? I'm a 10th grader who is extremely interested in Mathematics and I frequently come across some interesting (at least, to me) results while doing some Math problem and sometimes I want to get an expert-level opinion on that result. For example, I was recently thinking about how one would go about defining a function that gives a graph like the one given below :
I successfully defined such a function using a combination of the floor function, ceiling function, fractional part function and the signum function. It was pretty interesting for me. Another time, I discovered a simple derivation for the quadratic formula and once, a derivation for the compound angle identities in Trigonometry
These are some examples of when I wanted to share these and get some reviews/opinions about the results that I had obtained. So, basically, is there a website for Mathematics like Code Review for Coding in the Stack Exchange Community? Thanks! PS : If you're wondering what the function is, it's given below : $$f(x) = \text{Sign}\Bigg(\Bigg\{\dfrac{\lceil x \rceil}{2} \Bigg \} - a \Bigg) \text{, where } 0 < a < 0.5$$ $$\text{Here, }\{ x \} \text { is the fractional part function which is defined as } \{ x \} = x - \lfloor x \rfloor$$ $$\text{And Sign}(x) \text{ is the signum function, which gives the sign of the input, and } 0 \text{ in case the input is }0$$
Edit : I recently thought of a much simpler version of the function that I talk about above.
|
It is : $$f(x) = \cos(\lfloor x \rfloor \cdot \pi)$$
• A better and clearer definition would be to not insist that it be given by a single formula and to say that $$f(x)=\begin{cases}1&\mbox{ if }2n<x\leq 2n+1\\-1&\mbox{ otherwise}.\end{cases}$$ – Matt Samuel Jun 16 '20 at 20:47
• An addition though : "Where $n \in \Bbb Z$".
|
https://math.stackexchange.com/questions/2792471/linearization-of-system-of-odes-around-operating-point-transfer-functions-and
| 1,726,799,609,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00678.warc.gz
| 340,249,568
| 37,653
|
# Linearization of System of ODEs around Operating Point / Transfer Functions and State Space
I have this system of ODEs and I'm trying to get a linearized version of it around the "operating point" $\overline{x}_1 = 1$
$$\left\{\begin{matrix} \ddot{x_1}(t)+2\dot{x_1}(t)+2x_1^2(t)-2\dot{x_2}(t)=0 \\ 2\ddot{x_2}(t)+2\dot{x_2}(t)-2\dot{x_1}(t)=f(t) \end{matrix}\right.$$
So I define a small perturbation $\delta x_1$, $\delta x_2$ and $\delta f$ around the operating point $\overline{x}_1$, $\overline{x}_2$ and $\overline{f}$
$$\delta x_1 = x_1 - \overline{x}_1 \Rightarrow \dot{x_1} = \dot{\delta x_1} \Rightarrow \ddot{x_1} = \ddot{\delta x_1}$$
$$\delta x_2 = x_2 - \overline{x}_2 \Rightarrow \dot{x_2} = \dot{\delta x_2} \Rightarrow \ddot{x_2} = \ddot{\delta x_2}$$
$$\delta f = f - \overline{f}$$
I use Taylor polynomial to linearize $x_1^2(t)$ around $\overline{x}_1=1$ as
$$x_1^2 \approx \overline{x}_1^2 + 2\overline{x}_1 \delta x_1 = 1 + 2\delta x_1$$
I replace all in the original equations:
$$\left\{\begin{matrix}\delta\ddot{x_1}(t)+2\delta\dot{x_1}(t)+2\left [1+2\delta x_1(t) \right ] - 2 \delta \dot{x_2}(t)=0 \\ 2\delta \ddot{x_2}(t)+2\delta \dot{x_2}(t)-2\delta \dot{x_1}(t)=\overline{f}+\delta f(t) \end{matrix}\right.$$
This system is "linear", but not homogeneous, because it has constant terms $2$ and $\overline{f}$. In fact, through force balance we get that $\overline{f}=2$, so the constant terms should mathematically cancel out somehow.
How can I get rid of these constant terms? Is there another (better) way to linearize this system of ODEs around $\overline{x}_1=1$?
By the way, I got this system of ODEs from this physical system:
• How did you determine that $x_1=1$ is the operating point? You will need a non-constant $\bar f$, as with a constant one you get $x_1=0$ as equilibrium point, just from physical considerations. Note that $$\frac{d}{dt}\left[\frac12 \dot x_1(t)^2+\dot x_2(t)^2+\frac23x_1(t)^3\right]=f(t)\dot x_2(t)-2(\dot x_1(t)-\dot x_2(t))^2,$$ where the last term continuously loses energy, leading to $x_1 \to 0$. You will need a very specific $f$ to continuously replace that lost energy. Commented May 23, 2018 at 7:24
• This is a problem from a textbook. It specifically asks to linearize about $x_1=1$. Thank you Commented May 23, 2018 at 7:35
Linearising a non-linear system of the form $\dot{x} = g(x,f)$ at an operating point $\bar{x}$ and $\bar{f}$ requires that $g(\bar{x},\bar{f})=0$. Since $\bar{x}_1$ is given and $g(x,f)$ is not a function of $x_2$, $g(\bar{x},\bar{f})=0$ only has a solution when $x_2$ is omitted from the state space vector, so $x$ only contains $x_1$, $\dot{x}_1$ and $\dot{x}_2$ and no $x_2$. So $\bar{\dot{x}}_2$ can then be a non-zero constant, which can be chosen such that $g(\bar{x},\bar{f})=0$ can be satisfied.
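Here is a sketch of that recipe with SymPy (my own transcription, not from the answer): take the state as $(x_1, \dot{x}_1, \dot{x}_2)$, solve $g=0$ at $\bar{x}_1=1$, and read off the Jacobians. It reproduces the force balance $\bar{f}=2$ mentioned in the question.

```python
import sympy as sp

x1, v1, v2, f = sp.symbols('x1 v1 v2 f')   # v1 = x1', v2 = x2'

# First-order form of the system (x2 itself never appears, so it is omitted):
#   x1' = v1
#   v1' = -2*v1 - 2*x1**2 + 2*v2        from  x1'' + 2 x1' + 2 x1^2 - 2 x2' = 0
#   v2' = (f + 2*v1 - 2*v2) / 2         from  2 x2'' + 2 x2' - 2 x1'       = f
g = sp.Matrix([v1,
               -2*v1 - 2*x1**2 + 2*v2,
               (f + 2*v1 - 2*v2) / 2])
state = sp.Matrix([x1, v1, v2])

# operating point: x1 = 1, v1 = 0; solve g = 0 for the remaining unknowns
op = sp.solve(list(g.subs({x1: 1, v1: 0})), [v2, f], dict=True)[0]
print(op)            # {v2: 1, f: 2}  -> matches f_bar = 2 from the force balance

# Jacobians give the linearized system  d(delta_z)/dt = A*delta_z + B*delta_f
subs = {x1: 1, v1: 0, v2: op[v2], f: op[f]}
A = g.jacobian(state).subs(subs)
B = g.jacobian(sp.Matrix([f])).subs(subs)
print(A)             # Matrix([[0, 1, 0], [-4, -2, 2], [0, 1, -1]])
print(B)             # Matrix([[0], [0], [1/2]])
```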
| 1,011
| 2,827
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.84375
| 4
|
CC-MAIN-2024-38
|
latest
|
en
| 0.709077
|
# Linearization of System of ODEs around Operating Point / Transfer Functions and State Space
I have this system of ODEs and I'm trying to get a linearized version of it around the "operating point" $\overline{x}_1 = 1$
$$\left\{\begin{matrix} \ddot{x_1}(t)+2\dot{x_1}(t)+2x_1^2(t)-2\dot{x_2}(t)=0 \\ 2\ddot{x_2}(t)+2\dot{x_2}(t)-2\dot{x_1}(t)=f(t) \end{matrix}\right.$$
So I define a small perturbation $\delta x_1$, $\delta x_2$ and $\delta f$ around the operating point $\overline{x}_1$, $\overline{x}_2$ and $\overline{f}$
$$\delta x_1 = x_1 - \overline{x}_1 \Rightarrow \dot{x_1} = \dot{\delta x_1} \Rightarrow \ddot{x_1} = \ddot{\delta x_1}$$
$$\delta x_2 = x_2 - \overline{x}_2 \Rightarrow \dot{x_2} = \dot{\delta x_2} \Rightarrow \ddot{x_2} = \ddot{\delta x_2}$$
$$\delta f = f - \overline{f}$$
I use Taylor polynomial to linearize $x_1^2(t)$ around $\overline{x}_1=1$ as
$$x_1^2 \approx \overline{x}_1^2 + 2\overline{x}_1 \delta x_1 = 1 + 2\delta x_1$$
I replace all in the original equations:
$$\left\{\begin{matrix}\delta\ddot{x_1}(t)+2\delta\dot{x_1}(t)+2\left [1+2\delta x_1(t) \right ] - 2 \delta \dot{x_2}(t)=0 \\ 2\delta \ddot{x_2}(t)+2\delta \dot{x_2}(t)-2\delta \dot{x_1}(t)=\overline{f}+\delta f(t) \end{matrix}\right.$$
This system is "linear", but not homogeneous, because it has constant terms $2$ and $\overline{f}$. In fact, through force balance we get that $\overline{f}=2$, so the constant terms should mathematically cancel out somehow. How can I get rid of this constant terms? Is there another (better) way to linearize this system of ODEs around $\overline{x}_1=1$
By the way, I got this systems of ODEs from this physical system:
• How did you determine that $x_1=1$ is the operating point? You will need a non-constant $\bar f$, as with a constant one you get $x_1=0$ as equilibrium point, just from physical considerations. Note that $$\frac{d}{dt}\left[\frac12 \dot x_1(t)^2+\dot x_2(t)^2+\frac23x_1(t)^3\right]=f(t)\dot x_2(t)-2(\dot x_1(t)-\dot x_2(t))^2,$$ where the last term continuously loses energy, leading to $x_1 \to 0$. You will need a very specific $f$ to continuously replace that lost energy. Commented May 23, 2018 at 7:24
• This is a problem from a textbook. It specifically asks to linearize about $x_1=1$. Thank you Commented May 23, 2018 at 7:35
Linearising a non-linear system of the form $\dot{x} = g(x,f)$ at an operating point $\bar{x}$ and $\bar{f}$ requires that $g(\bar{x},\bar{f})=0$. Since $\bar{x}_1$ is given and $g(x,f)$ is not a function of $x_2$, $g(\bar{x},\bar{f})=0$ only has a solution when $x_2$ is omitted from the state space vector, so $x$ only contains $x_1$, $\dot{x}_1$ and $\dot{x}_2$ and no $x_2$.
|
So $\bar{\dot{x}}_2$ can then be a non-zero constant, which can be chosen such that $g(\bar{x},\bar{f})=0$ can be satisfied.
|
https://math.stackexchange.com/questions/633757/order-of-conjugate-of-an-element-given-the-order-of-its-conjugate?noredirect=1
| 1,627,528,506,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-31/segments/1627046153814.37/warc/CC-MAIN-20210729011903-20210729041903-00452.warc.gz
| 370,976,788
| 40,303
|
# Order of conjugate of an element given the order of its conjugate
Let $G$ be a group and $a, b \in G$. If $a$ has order $6$, then the order of $bab^{-1}$ is...
How to find this answer? Sorry for my bad question, but I need this for my study.
• Hint: conjugation is an automorphism – dani_s Jan 10 '14 at 14:27
• @dani_s: that would be too much... it is a basic question and you are proposing to look at the automorphism.. Not a good idea i guess.. – user87543 Jan 10 '14 at 15:32
$$|bab^{-1}|=k\to (bab^{-1})^k=e_G$$ and $k$ is the least positive integer. But $e_G=(bab^{-1})^k=ba^kb^{-1}$ so $a^k=e_G$ so $6\le k$. Obviously, $k\le 6$ (Why?) so $k=6$.
Two good pieces of advice are already out here that prove the problem directly, but I'd like to decompose and remix them a little.
For a group $G$ and any $g\in G$, the map $x\mapsto gxg^{-1}$ is actually a group automorphism (self-isomorphism). This is a good exercise to prove if you haven't already proven it.
Intuitively, given an isomorphism $\phi$, $\phi(G)$ looks just like $G$, and $\phi(g)$ has the same group theoretic properties as $g$. (This includes order.) This motivates you to show that $g^n=1$ iff $\phi(g)^n=1$, and so (for minimal choice of $n$) they share the same order.
Here's a slightly more general statement for $\phi$'s that aren't necessarily isomorphisms. Let $\phi:G\to H$ be a group homomorphism of finite groups. Then for each $g\in G$, the order of $\phi(g)$ divides the order of $g$. (Try to prove this!)
If $\phi$ is an isomorphism, then so is $\phi^{-1}$, and so the order of $\phi(g)$ divides the order of $g$, and the order of $\phi^{-1}(\phi(g))=g$ divides the order of $\phi(g)$, and thus they're equal.
• @Andreas It seems this question (and variants) are destined to be prototypical examples of an abstract duplicate (e.g. recall the recent question). In fact, even the comments are becoming duplicate! – Bill Dubuque Jan 12 '14 at 17:56
• @BillDubuque, an optimistic view of the fact that the comments are becoming duplicates is that we are reaching a consensus on a canonical form for answers and comments ;-) – Andreas Caranti Jan 12 '14 at 18:08
Note that $(bab^{-1})^2 = bab^{-1}bab^{-1} = ba^2b^{-1}$. Similarly $(bab^{-1})^n = ba^nb^{-1}$ for any $n$. When will $ba^nb^{-1} = 1$ using the information about $a$? Then you just have to check to see that $ba^mb^{-1} \not = 1$ for any $1 \leq m < n$.
• I still don't get it. – Yagami Jan 10 '14 at 14:58
In general, let $o(a)=n$ and $o(bab^{-1})=k$. Then $(bab^{-1})^k=ba^kb^{-1}=e$, and by the cancellation law in a group we get $a^k=e$; since $o(a)=n$, this gives $k \geq n$ (in fact we get $n|k$, but in this proof $k \geq n$ is enough). It is easy to see that $(bab^{-1})^n=ba^nb^{-1}=beb^{-1}=e$, hence $k \leq n$ and so $k=n$.
CONCLUSION: $o(a)=o(bab^{-1})$.
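A brute-force check of the conclusion in a small concrete group (plain Python, with permutations of $\{0,\dots,4\}$ standing in for $S_5$; not part of the original answers):

```python
from itertools import permutations

def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def order(p):
    e = tuple(range(len(p)))
    q, n = p, 1
    while q != e:
        q, n = compose(p, q), n + 1
    return n

a = (1, 0, 3, 4, 2)                     # the permutation (0 1)(2 3 4), which has order 6
assert order(a) == 6

# every conjugate b a b^{-1} has the same order as a
for b in permutations(range(5)):
    assert order(compose(compose(b, a), inverse(b))) == 6
```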
| 911
| 2,811
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.921875
| 4
|
CC-MAIN-2021-31
|
latest
|
en
| 0.887827
|
# Order of conjugate of an element given the order of its conjugate
Let $G$ be a group and $a, b \in G$. If $a$ has order $6$, then the order of $bab^{-1}$ is...
How to find this answer? Sorry for my bad question, but I need this for my study. • Hint: conjugation is an automorphism – dani_s Jan 10 '14 at 14:27
• @dani_s: that would be too much... it is a basic question and you are proposing to look at the automorphism.. Not a good idea i guess.. – user87543 Jan 10 '14 at 15:32
$$|bab^{-1}|=k\to (bab^{-1})^k=e_G$$ and $k$ is the least positive integer. But $e_G=(bab^{-1})^k=ba^kb^{-1}$ so $a^k=e_G$ so $6\le k$. Obviously, $k\le 6$ (Why?) so $k=6$. Two good pieces of advice are already out here that prove the problem directly, but I'd like to decompose and remix them a little. For a group $G$ and any $g\in G$, the map $x\mapsto gxg^{-1}$ is actually a group automorphism (self-isomorphism). This is a good exercise to prove if you haven't already proven it. Intuitively, given an isomorphism $\phi$, $\phi(G)$ looks just like $G$, and $\phi(g)$ has the same group theoretic properties as $g$. (This includes order.) This motivates you to show that $g^n=1$ iff $\phi(g)^n=1$, and so (for minimal choice of $n$) they share the same order. Here's a slightly more general statement for $\phi$'s that aren't necessarily isomorphisms. Let $\phi:G\to H$ be a group homomorphism of finite groups. Then for each $g\in G$, the order of $\phi(g)$ divides the order of $g$. (Try to prove this!) If $\phi$ is an isomorphism, then so is $\phi^{-1}$, and so the order of $\phi(g)$ divides the order of $g$, and the order of $\phi^{-1}(\phi(g))=g$ divides the order of $\phi(g)$, and thus they're equal. • @Andreas It seems this question (and variants) are destined to be prototypical examples of an abstract duplicate (e.g. recall the recent question). In fact, even the comments are becoming duplicate! – Bill Dubuque Jan 12 '14 at 17:56
• @BillDubuque, an optimistic view of the fact that the comments are becoming duplicates is that we are reaching a consensus on a canonical form for answers and comments ;-) – Andreas Caranti Jan 12 '14 at 18:08
Note that $(bab^{-1})^2 = bab^{-1}bab^{-1} = ba^2b^{-1}$. Similarly $(bab^{-1})^n = ba^nb^{-1}$ for any $n$. When will $ba^nb^{-1} = 1$ using the information about $a$? Then you just have to check to see that $ba^mb^{-1} \not = 1$ for any $1 \leq m < n$. • I still don't get it. – Yagami Jan 10 '14 at 14:58
In general, let $o(a)=n$ and $o(bab^{-1})=k$. Then $(bab^{-1})^k=ba^kb^{-1}=e$, and by the cancellation law in a group we get $a^k=e$; since $o(a)=n$, this gives $k \geq n$ (in fact we get $n|k$, but in this proof $k \geq n$ is enough). It is easy to see that $(bab^{-1})^n=ba^nb^{-1}=beb^{-1}=e$, hence $k \leq n$ and so $k=n$.
|
CONCLUSION: $o(a)=o(bab^{-1})$.
|
https://gamedev.stackexchange.com/questions/138165/how-can-i-move-and-rotate-an-object-in-an-infinity-or-figure-8-trajectory/138167
| 1,560,682,448,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00534.warc.gz
| 453,195,493
| 34,997
|
# How can I move and rotate an object in an “infinity” or “figure 8” trajectory?
I know that the easiest way to move an object with the figure 8 trajectory is:
x = cos(t);
y = sin(2*t) / 2;
but I just don't know how to rotate it. Let's say there is a new variable r for the rotation; how do I merge it into the above formula? Can anyone please advise me on what is the simplest and cheapest way/formula to move and rotate the figure 8 trajectory?
## 1 Answer
The object should point in the direction of the derivative, which is [-sin(t), cos(2t)]. Its angle is atan2(cos(2t), -sin(t)).
Edit: OP is apparently asking how to rotate the "trajectory," not the object itself.
To rotate the figure, choose an angle, θ, in radians, that you'd like the trajectory to be rotated. The position along this rotated figure is:
x = cos(θ) * cos(t) - sin(θ) * sin(2t)/2
y = sin(θ) * cos(t) + cos(θ) * sin(2t)/2
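For example, a short runnable sketch of these formulas (numpy; theta is an arbitrary rotation angle chosen for illustration), together with the heading angle from the first part of the answer applied to the rotated velocity:

```python
import numpy as np

theta = np.pi / 6                       # rotation of the whole trajectory (example value)
t = np.linspace(0.0, 2.0 * np.pi, 200)

# base figure-8 curve and its derivative (used for the object's heading)
x0, y0 = np.cos(t), np.sin(2 * t) / 2
dx0, dy0 = -np.sin(t), np.cos(2 * t)

c, s = np.cos(theta), np.sin(theta)
x, y = c * x0 - s * y0, s * x0 + c * y0          # rotated position
dx, dy = c * dx0 - s * dy0, s * dx0 + c * dy0    # rotated velocity

heading = np.arctan2(dy, dx)            # angle the object should face along the path
```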
• so how would I modify the formula to get a rotated figure of 8 ? – user1998844 Mar 3 '17 at 18:24
• That is a completely different question than the one I answered. I'll edit my answer with a solution to this question. – Drew Cummins Mar 3 '17 at 18:31
| 325
| 1,153
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.59375
| 4
|
CC-MAIN-2019-26
|
latest
|
en
| 0.921025
|
# How can I move and rotate an object in an “infinity” or “figure 8” trajectory? I know that the easiest way to move an object with the figure 8 trajectory is:
x = cos(t);
y = sin(2*t) / 2;
but I just don't know how to rotate it. Let's say there is a new variable r for the rotation; how do I merge it into the above formula? Can anyone please advise me on what is the simplest and cheapest way/formula to move and rotate the figure 8 trajectory? ## 1 Answer
The object should point in the direction of the derivative, which is [-sin(t), cos(2t)]. Its angle is atan2(cos(2t), -sin(t)). Edit: OP is apparently asking how to rotate the "trajectory," not the object itself. To rotate the figure, choose an angle, θ, in radians, that you'd like the trajectory to be rotated. The position along this rotated figure is:
x = cos(θ) * cos(t) - sin(θ) * sin(2t)/2
y = sin(θ) * cos(t) + cos(θ) * sin(2t)/2
• so how would I modify the formula to get a rotated figure of 8 ? – user1998844 Mar 3 '17 at 18:24
• That is a completely different question than the one I answered. I'll edit my answer with a solution to this question.
|
– Drew Cummins Mar 3 '17 at 18:31
|
https://engineering.stackexchange.com/questions/54395/how-much-force-is-needed-to-break-off-the-stick
| 1,719,331,888,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198866143.18/warc/CC-MAIN-20240625135622-20240625165622-00431.warc.gz
| 196,296,852
| 39,140
|
# How much force is needed to break off the stick
Let's consider the following figure
The grey box contains a blue stick which is fixed. The blue stick has a length of $$a+b+c$$ and two diameters $$f,h$$. The diameter $$h$$ describes the part $$b$$ of the stick. The stick is fixed in the plane but the plane is not connected to the grey box. A force $$F$$ is pushing against the white plane as in the picture. How much force is needed to break off the stick in part $$b$$?
• How much effort have you applied to try to obtain a proposed solution? Commented Feb 26, 2023 at 20:03
• I don't have an idea how I could solve this because I have never had a mechanical problem with a notch. What I also can say is that I see two different ways this plane could move: one way would be straight downward if the force is close to the notch, or the plane is rotated if the force comes from the outer part of the plane. Commented Feb 26, 2023 at 20:26
• I just have some knowledge about bending sticks and not about stuff like in my picture. Commented Feb 26, 2023 at 20:43
• Apply your knowledge about bending sticks to try to solve the problem. We should like to see how far that takes you. Commented Feb 26, 2023 at 20:57
• Does the white plane slide against the grey plane or does it tilt ie pivot at the lower left corner? Commented Feb 26, 2023 at 21:18
## 1 Answer
We assume the distance from F to the hinge to be
$$X_F=a+b+c+d/2$$ We calculate the equivalent I of the cantilever beam, with the parallel axis. When it bends it will rotate about a point at the lower corner of the gray support, call it point A. Let's annotate the thickness of the bar, B.
$$I_{Beam} =I_{stick}+ A_{stick}*Y^2_{stick}$$ $$I_{stick}= bh^3/12$$ $$I_{Beam}=bh^3/12+bh(e+f/2)^2$$
We assume the stick will break at the yield stress and ignore 2nd hardening (or, if we have it, we plug it in).
$$\sigma_y=\frac{MC}{I_{Beam}}=\frac{(F*x)(e+f/2)}{bh^3/12+bh(e+f/2)^2}$$ $$F*X=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{e+f/2}$$
$$F=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{(e+f/2)*(a+b+c+d/2)}$$
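To make the result concrete, here is a small Python sketch that plugs hypothetical dimensions and a hypothetical yield stress into the final formula (every value below is invented purely for illustration; b_width is the bar width used in the bending formulas, not the segment length b):

```python
sigma_y = 250e6            # assumed yield stress, Pa
b_width = 0.010            # assumed width of the bar in the bending formulas, m
h       = 0.004            # assumed height of the notched section b, m
e       = 0.020            # assumed offset to the pivot point A, m
f       = 0.012            # assumed full diameter of the stick, m
a, b_len, c, d = 0.05, 0.01, 0.03, 0.02   # assumed segment lengths, m

I_beam = b_width * h**3 / 12 + b_width * h * (e + f / 2) ** 2
X_F    = a + b_len + c + d / 2

F = sigma_y * I_beam / ((e + f / 2) * X_F)
print(f"estimated breaking force: {F:.1f} N")
```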
| 611
| 2,057
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.21875
| 4
|
CC-MAIN-2024-26
|
longest
|
en
| 0.930916
|
# How much force is needed to break off the stick
Let's consider the following figure
The grey box contains a blue stick which is fixed. The blue stick has a length of $$a+b+c$$ and two diameters $$f,h$$. The diameter $$h$$ describes the part $$b$$ of the stick. The stick is fixed in the plane but the plane is not connected to the grey box. A force $$F$$ is pushing against the withe plane like in the picture. How much force is needed to break off the stick in part $$b$$? • How much effort have you applied to try to obtain a proposed solution? Commented Feb 26, 2023 at 20:03
• I don't have an idea how I could solve this because I had never a mechanical problem with a notch. What I also can say is that I see two different ways how this plane could move: One way would be striaght downward if the force is close to the notch or the plane is rotated if the force comes from the outer part of the plane. Commented Feb 26, 2023 at 20:26
• I just have some knowledge about bending sticks and not about stuff like in my picture. Commented Feb 26, 2023 at 20:43
• Apply your knowledge about bending sticks to try to solve the problem. We should like to see how far that takes you. Commented Feb 26, 2023 at 20:57
• Does the white plane slide against the grey plane or does it tilt ie pivot at the lower left corner? Commented Feb 26, 2023 at 21:18
## 1 Answer
We assume the distance from F to the hinge to be
$$X_F=a+b+c+d/2$$ We calculate the equivalent I of the cantilever beam, with the parallel axis. When it bends it will rotate about a point at the lower corner of the gray support, call it point A. Let's annotate the thickness of the bar, B. $$I_{Beam} =I_{stick}+ A_{stick}*Y^2_{stick}$$ $$I_{stick}= bh^3/12$$ $$I_{Beam}=bh^3/12+bh(e+f/2)^2$$
we assume the stick will break at yield stress and ignore 2nd hardening, or if we have it we plug it.
|
$$\sigma_y=\frac{MC}{I_{Beam}}=\frac{(F*x)(e+f/2)}{bh^3/12+bh(e+f/2)^2}$$ $$F*X=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{e+f/2}$$
$$F=\frac{\sigma y*(bh^3/12+bh(e+f/2)^2)}{(e+f/2)*(a+b+c+d/2)}$$
|
https://stats.stackexchange.com/questions/592820/how-can-i-find-the-expectation-value-to-this-problem
| 1,721,790,883,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763518154.91/warc/CC-MAIN-20240724014956-20240724044956-00116.warc.gz
| 475,454,993
| 38,560
|
# How can I find the expectation value to this problem?
At a wedding reception one evening, the representative of the host is taking it as an occasion to exercise and explain a classical analytic problem. Specifically, he insists that he will start serving the food only when the first table, which is arranged for 12 guests to dine together, has guests born in every one of the twelve months of the year. Assume that any given guest is equally likely to be born in any of the twelve months of the year, and that new guests arrive every two minutes. What is the expected waiting time of the first arriving guest before the food eventually gets served?
Since this looked like a variation of the Coupon Collector's problem, my initial approach was to sum the expected number of additional guests needed for each new unique birth month.
X ~ FS(p) [First Success Distribution]
X = time needed until food gets served
$$E[X] = E[X1] + E[X2] + ... + E[X12]$$
$$=> E[X] = 12/12 + 12/11 + ... + 12/1$$
However, this is where I ran into a problem, since I don't know how to handle the arrivals every two minutes in my equation. Should I just multiply by 2? Or am I missing something very obvious or basic? Help will be appreciated.
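One way to handle the timing, sketched in R, assuming the first guest also counts toward the birth-month collection, arrives at time 0, and the n-th guest arrives 2(n-1) minutes later (these assumptions are mine, not stated in the problem):
E_guests <- 12 * sum(1 / (1:12))         # expected number of guests needed, ~ 37.24
E_wait   <- 2 * (E_guests - 1)           # minutes the first guest waits, ~ 72.5
sim <- replicate(1e4, {                  # simulation cross-check of the same quantity
  seen <- integer(0); n <- 0
  while (length(unique(seen)) < 12) { seen <- c(seen, sample(12, 1)); n <- n + 1 }
  2 * (n - 1)
})
c(analytic = E_wait, simulated = mean(sim))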
| 279
| 1,228
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.5625
| 4
|
CC-MAIN-2024-30
|
latest
|
en
| 0.97415
|
# How can I find the expectation value to this problem? At a wedding reception on an evening the representative of the host is taking it as an occasion to exercise and explain a classical analytic problem. specifically, he insists that he would start serving the food only when the first table, which is arranged for 12 guests to dine together, has guests born in every twelve months of the year. assume that any given guest is equally likely to be born in any of the twelve months of the year, and that new guests were arriving at every two minutes then. what is the expected waiting time of the first arriving guest before the food gets served eventually? Since this looked like a Coupon Collector's problem variation, my initial approach was to determine the sum of the expected value of each guests of unique birth months.
|
X ~ FS(p) [First Success Distribution]
X = time needed until food gets served
$$E[X] = E[X1] + E[X2] + ... + E[X12]$$
$$=> E[X] = 12/12 + 12/11 + ... + 12/1$$
However, this is where i ran into problem, since I don't know how to handle the arrival at every two minutes in my equation.
|
https://math.stackexchange.com/questions/3003033/show-lim-x-to-x-0-fxx-x-0-0-when-f-mathbbr-subset-mathbbr
| 1,563,736,846,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-30/segments/1563195527196.68/warc/CC-MAIN-20190721185027-20190721211027-00476.warc.gz
| 468,784,586
| 36,310
|
# Show $\lim_{x \to x_0^+} f(x)(x-x_0) =0$ when $f(\mathbb{R}) \subset \mathbb{R}^+$ & monotone increasing.
Show $$\lim_{x \to x_0^+} f(x)(x-x_0) =0$$ when $$f(\mathbb{R}) \subset \mathbb{R}^+$$ & monotone increasing.
Try
I need to show,
$$\forall \epsilon >0, \exists \delta >0 : x \in (x_0, x_0 + \delta) \Rightarrow |f(x) (x-x_0)| < \epsilon$$
I think I could find some upper bound $$M >0$$ such that $$|f(x) (x-x_0)| \le M |x - x_0|$$.
Let $$M = f(x_0 + \epsilon)$$, and let $$\delta = \frac{\epsilon}{\max \{2M, 2 \}}$$, then clearly $$f(x) \le f(x_0 + \epsilon) = M$$
But I'm not sure $$|f(x) (x-x_0)| \le M |x - x_0|$$.
Any hint about how I should proceed?
Hint: Observe \begin{align} |f(x)(x-x_0)|\leq |f(x_0)||x-x_0| \end{align} for all $$x\leq x_0$$.
Use $$M=f(x_0+1)$$ and consider $$\delta=\min\{\frac{1}{2},\frac{\epsilon}{2M}\}$$.
Fix $$\varepsilon>0$$. Let $$M=f(x_0+1)$$ and choose $$\delta=\min\{1,\frac{\varepsilon}{M}\}$$. For each $$x\in(x_0,x_0+\delta)$$, $$|f(x)|\leq M$$ since $$f$$ is increasing. Thus, $$|f(x)(x-x_0)|\leq M|x-x_0|< M\delta\leq\varepsilon.$$
| 473
| 1,083
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.765625
| 4
|
CC-MAIN-2019-30
|
latest
|
en
| 0.547747
|
# Show $\lim_{x \to x_0^+} f(x)(x-x_0) =0$ when $f(\mathbb{R}) \subset \mathbb{R}^+$ & monotone increasing. Show $$\lim_{x \to x_0^+} f(x)(x-x_0) =0$$ when $$f(\mathbb{R}) \subset \mathbb{R}^+$$ & monotone increasing. Try
I need to show,
$$\forall \epsilon >0, \exists \delta >0 : x \in (x_0, x_0 + \delta) \Rightarrow |f(x) (x-x_0)| < \epsilon$$
I think I could find some upper bound $$M >0$$ such that $$|f(x) (x-x_0)| \le M |x - x_0|$$. Let $$M = f(x_0 + \epsilon)$$, and let $$\delta = \frac{\epsilon}{\max \{2M, 2 \}}$$, then clearly $$f(x) \le f(x_0 + \epsilon) = M$$
But I'm not sure $$|f(x) (x-x_0)| \le M |x - x_0|$$. Any hint about how I should proceed? Hint: Observe \begin{align} |f(x)(x-x_0)|\leq |f(x_0)||x-x_0| \end{align} for all $$x\leq x_0$$. Use $$M=f(x_0+1)$$ and cosider $$\delta=\min\{\frac{1}{2},\frac{\epsilon}{2M}\}$$. Fix $$\varepsilon>0$$. Let $$M=f(x_0+1)$$ and choose $$\delta=\mathrm{min}\{1,\frac{\varepsilon}{M}\}$$. For each $$x\in(x_0,x_0+\delta)$$, $$|f(x)|\leq M$$ since $$f$$ is strictly increasing.
|
Thus, $$|f(x)(x-x_0)|\leq M|x-x_0|.$$
|
https://math.stackexchange.com/questions/1762036/why-cant-you-count-up-to-aleph-null
| 1,702,151,622,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00443.warc.gz
| 417,244,395
| 37,938
|
# Why can't you count up to aleph null?
Recently I learned about the infinite cardinal $\aleph_0$, and stumbled upon a seeming contradiction. Here are my assumptions based on what I learned:
1. $\aleph_0$ is the cardinality of the natural numbers
2. $\aleph_0$ is larger than all finite numbers, and thus cannot be reached simply by counting up from 1.
But then I started wondering: the cardinality of the set $\{1\}$ is $1$, the cardinality of the set $\{1, 2\}$ is $2$, the cardinality of the set $\{1, 2, 3\}$ is 3, and so on. So I drew the conclusion that the cardinality of the set $\{1, 2, \ldots n\}$ is $n$.
Based on this conclusion, if the cardinality of the natural numbers is $\aleph_0$, then the set of natural numbers could be denoted as $\{1, 2, \ldots \aleph_0\}$. But such a set implies that $\aleph_0$ can be reached by counting up from $1$, which contradicts my assumption #2 above.
This question has been bugging me for a while now... I'm not sure where I've made a mistake in my reasoning or if I have even used the correct mathematical terms/question title/tags to describe it, but I'd sure appreciate your help.
• Can you count to $\aleph_0$? I am not even going to start to see if I can.
– user328032
Apr 28, 2016 at 1:19
• It seems to me that you want this to be an ordered set, but it does not really make sense to tack on $\aleph_0$ to the end in the way that you want. Apr 28, 2016 at 1:20
• @CameronWilliams Yes, but then what would be the last element of the set? Apr 28, 2016 at 1:21
• I can count up to $\aleph_0$. Just give me $\aleph_0$ seconds added to my life and I hope I will be able to be patient enough to do this... Countable doesn't mean you can count to it, it just means it contains the whole numbers excluding all the rational decimals between them. Apr 28, 2016 at 1:22
• @Timtech That's the thing. There isn't a "last" element here. There is a maximal element, but not a last. Last implies that you can reach that element in finitely many steps. "Last" is somewhat of a colloquialism. Apr 28, 2016 at 1:22
This is a good example where intuition about a pattern breaks down; what is true of finite sets is not true of infinite sets in general. The natural numbers $\textit{cannot}$ be denoted by the set $A=\{1,2,...,\aleph_0\}$ as the set $\aleph_0$ is not a natural number. It is true that the cardinality of $A$ is $\aleph_0$ (a good exercise), but it contains more than just natural numbers.
If $\aleph_0$ were a natural number then, as you point out, we would have a contradiction. However $\aleph_0$ is the $\textit{cardinality}$ of the natural numbers, and not a natural number itself. By definition, $\aleph_0$ is the least ordinal number with which the set $\omega$ of natural numbers may be put into bijection.
• Both... In $ZFC$ $\textit{everything}$ is a set, but more explicitly, the definition of cardinal numbers I know is this: Let $A$ be a set. Then the cardinal number of $A$ is the least ordinal $\kappa$ such that there exists a bijection $f: \kappa \to A$. Now by definition, ordinals are transitive sets that are well-ordered by $"\in"$, and since cardinals are in particular ordinals, they are sets. Since $\aleph_0$ is a cardinal number, it is also a set. Apr 28, 2016 at 22:01
$$\{1,2,\ldots,\text{ an infinite list of numbers },\ldots , \aleph_0\}$$
| 950
| 3,329
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.75
| 4
|
CC-MAIN-2023-50
|
latest
|
en
| 0.947401
|
# Why can't you count up to aleph null? Recently I learned about the infinite cardinal $\aleph_0$, and stumbled upon a seeming contradiction. Here are my assumptions based on what I learned:
1. $\aleph_0$ is the cardinality of the natural numbers
2. $\aleph_0$ is larger than all finite numbers, and thus cannot be reached simply by counting up from 1. But then I started wondering: the cardinality of the set $\{1\}$ is $1$, the cardinality of the set $\{1, 2\}$ is $2$, the cardinality of the set $\{1, 2, 3\}$ is 3, and so on. So I drew the conclusion that the cardinality of the set $\{1, 2, \ldots n\}$ is $n$. Based on this conclusion, if the cardinality of the natural numbers is $\aleph_0$, then the set of natural numbers could be denoted as $\{1, 2, \ldots \aleph_0\}$. But such a set implies that $\aleph_0$ can be reached by counting up from $1$, which contradicts my assumption #2 above. This question has been bugging me for a while now... I'm not sure where I've made a mistake in my reasoning or if I have even used the correct mathematical terms/question title/tags to describe it, but I'd sure appreciate your help. • Can you count to $\aleph_0$?. I am not even going to start to see if I can.,
– user328032
Apr 28, 2016 at 1:19
• It seems to me that you want this to be an ordered set, but it does not really make sense to tack on $\aleph_0$ to the end in the way that you want. Apr 28, 2016 at 1:20
• @CameronWilliams Yes, but then what would be the last element of the set? Apr 28, 2016 at 1:21
• I can count up to $\aleph_0$. Just give me $\aleph_0$ seconds added to my life and I hope I will be able to be patient enough to do this... Countable doesn't mean you can count to it, it just means it contains the whole numbers excluding all the rational decimals between them. Apr 28, 2016 at 1:22
• @Timtech That's the thing. There isn't a "last" element here. There is a maximal element, but not a last. Last implies that you can reach that element in finitely many steps. "Last" is somewhat of a colloquialism. Apr 28, 2016 at 1:22
This is a good example where intuition about a pattern breaks down; what is true of finite sets is not true of infinite sets in general.
|
The natural numbers $\textit{cannot}$ be denoted by the set $A=\{1,2,...,\aleph_0\}$ as the set $\aleph_0$ is not a natural number.
|
https://stats.stackexchange.com/questions/591229/generating-random-variable-which-has-a-power-distribution-of-box-and-tiao-1962
| 1,709,588,368,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-10/segments/1707947476532.70/warc/CC-MAIN-20240304200958-20240304230958-00885.warc.gz
| 552,287,119
| 41,580
|
# Generating random variable which has a power distribution of Box and Tiao (1962)
Box and Tiao (Biometrika 1962) use a distribution whose density has the following form: $$f(x; \mu, \sigma, \alpha) = \omega \exp\left\{ -\frac{1}{2} \Big\vert\frac{x-\mu}{\sigma}\Big\vert^{\frac{2}{(1+\alpha)}} \right\},$$ where $$\omega^{-1} = [\Gamma(g(\alpha))]\,2^{g(\alpha)}\sigma$$ is the normalizing constant with $$g(\alpha) = \frac{3}{2} + \frac{\alpha}{2},$$ $$\sigma \gt 0,$$ and $$-1 \lt \alpha \lt 1$$.
When $$\alpha=0$$ this reduces to the normal distribution; when $$\alpha=1$$ it reduces to the double exponential (Laplace) distribution, and when $$\alpha \to -1^{+}$$ it tends to a uniform distribution.
How can I generate random numbers from this distribution for any such value of $$\alpha$$?
Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960).
Because $$\mu$$ and $$\sigma$$ just establish a unit of measurement and the absolute value reflects values around the origin, the basic density is proportional to $$\exp(-z^p/2)$$ where $$p = 2/(1+\alpha)$$ and $$z \ge 0.$$ Changing variables to $$y = z^p$$ for $$0\lt p \lt \infty$$ changes the probability element to
$$\exp(-z^p/2)\mathrm{d}z \to \exp(-y/2) \mathrm{d}\left(y^{1/p}\right) = \frac{1}{p}y^{1/p - 1}e^{-y/2}\mathrm{d}y.$$
Since $$p = 2/(1+\alpha),$$ this is proportional to a scaled Gamma$$(1/p)$$ = Gamma$$((1+\alpha)/2)$$ density, also known as a Chi-squared$$(1+\alpha)$$ density.
Thus, to generate a value from such a distribution, undo all these transformations in reverse order:
Generate a value $$Y$$ from a Chi-squared$$(1+\alpha)$$ distribution, raise it to the $$(1+\alpha)/2$$ power (that is, the $$1/p$$ power), randomly negate it (with probability $$1/2$$), multiply by $$\sigma,$$ and add $$\mu.$$
This R code exhibits one such implementation. n is the number of independent values to draw.
rf <- function(n, mu, sigma, alpha) {
y <- rchisq(n, 1 + alpha) # A chi-squared variate
u <- sample(c(-1,1), n, replace = TRUE) # Random sign change
y^((1 + alpha)/2) * u * sigma + mu
}
Here are some examples of values generated in this fashion (100,000 of each) along with graphs of $$f.$$
Generating Chi-squared variates with parameter $$1+\alpha$$ near zero is problematic. You can see this code works for $$1+\alpha = 0.1$$ (bottom left), but watch out when it gets much smaller than this:
The spike and gap in the middle should not be there.
The problem lies with floating point arithmetic: even double precision does not suffice. By this point, though, the uniform distribution looks like a good approximation.
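One possible workaround, my own suggestion rather than part of the answer, is to fall back on the limiting uniform distribution once 1 + alpha drops below a small threshold; a sketch:
rf_safe <- function(n, mu, sigma, alpha, eps = 0.05) {
  if (1 + alpha < eps) {
    # In the limit alpha -> -1 the density tends to a uniform distribution,
    # roughly on (mu - sigma, mu + sigma), so sample that directly.
    return(runif(n, mu - sigma, mu + sigma))
  }
  y <- rchisq(n, 1 + alpha)
  u <- sample(c(-1, 1), n, replace = TRUE)
  y^((1 + alpha) / 2) * u * sigma + mu
}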
### Appendix
This R code produced the plots. It uses the showtext library to access a Google font for the axis numbers and labels. Few of these fonts, if any, support Greek or math characters, so I had to use the default font for the plot titles (using mtext). Otherwise, everything is done with the base R plotting functions hist and curve. Don't be concerned about the relatively large simulation size: the total computation time is far less than one second to generate these 400,000 variates.
library(showtext)
showtext_auto()
#
# Density calculation.
#
f <- function(x, mu, sigma, alpha)
exp(-1/2 * abs((x - mu) / sigma) ^ (2 / (1 + alpha)))
C <- function(mu, sigma, alpha, ...)
integrate(\(x) f(x, mu, sigma, alpha), -Inf, Inf, ...)$value
#
# Specify the distributions to plot.
#
Parameters <- list(list(mu = 0, sigma = 1, alpha = 0),
list(mu = 10, sigma = 2, alpha = 1/2),
list(mu = 0, sigma = 3, alpha = -0.9),
list(mu = 0, sigma = 4, alpha = 0.99))
#
# Generate the samples and plot summaries of them.
#
n.sim <- 1e5 # Sample size per plot
set.seed(17) # For reproducibility
pars <- par(mfrow = c(2, 2), mai = c(1/2, 3/4, 3/8, 1/8)) # Shrink the margins
for (parameters in Parameters)
with(parameters, {
x <- rf(n.sim, mu, sigma, alpha)
hist(x, freq = FALSE, breaks = 100, family = "Informal",
xlab = "", main = "", col = gray(0.9), border = gray(0.7))
mtext(bquote(list(mu==.(mu), sigma==.(sigma), alpha==.(alpha))),
cex = 1.25, side = 3, line = 0)
omega <- 1 / C(mu, sigma, alpha) # Compute the normalizing constant
curve(omega * f(x, mu, sigma, alpha), add = TRUE, lwd = 2, col = "Red")
})
par(pars)
• That's some clean code...
– Zen
Oct 5, 2022 at 17:20
• Nice and very fastly delivered:) answer.
– Yves
Oct 5, 2022 at 17:32
• Beautiful solution. Thank you! Oct 6, 2022 at 18:19
• @whuber: Can you please show us how you generated the lovely plots? Oct 6, 2022 at 18:26
• @user67724 Done.
– whuber
Oct 6, 2022 at 19:05
| 1,414
| 4,604
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 30, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.703125
| 4
|
CC-MAIN-2024-10
|
latest
|
en
| 0.750842
|
# Generating random variable which has a power distribution of Box and Tiao (1962)
Box and Tiao (Biometrika 1962) use a distribution whose density has the following form: $$f(x; \mu, \sigma, \alpha) = \omega \exp\left\{ -\frac{1}{2} \Big\vert\frac{x-\mu}{\sigma}\Big\vert^{\frac{2}{(1+\alpha)}} \right\},$$ where $$\omega^{-1} = [\Gamma(g(\alpha)]\,2^{g(\alpha)}\sigma$$ is the normalizing constant with $$g(\alpha) = \frac{3}{2} + \frac{\alpha}{2},$$ $$\sigma \gt 0,$$ and $$-1 \lt \alpha \lt 1$$. When $$\alpha=0$$ this reduces to the normal distribution; when $$\alpha=1$$ it reduces to the double exponential (Laplace) distribution, and when $$\alpha \to -1^{+}$$ it tends to a uniform distribution. How can I generate random numbers from this distribution for any such value of $$\alpha$$? Box & Tiao refer to this as a "convenient class of power distributions," referencing Diananda (1949), Box (1953), and Turner (1960). Because $$\mu$$ and $$\sigma$$ just establish a unit of measurement and the absolute value reflects values around the origin, the basic density is proportional to $$\exp(-z^p/2)$$ where $$p = 2/(1+\alpha)$$ and $$z \ge 0.$$ Changing variables to $$y = z^p$$ for $$0\lt p \lt \infty$$ changes the probability element to
$$\exp(-z^p/2)\mathrm{d}z \to \exp(-y/2) \mathrm{d}\left(y^{1/p}\right) = \frac{1}{p}y^{1/p - 1}e^{-y/2}\mathrm{d}y.$$
Since $$p = 2/(1+\alpha),$$ this is proportional to a scaled Gamma$$(1/p)$$ = Gamma$$((1+\alpha)/2)$$ density, also known as a Chi-squared$$(1+\alpha)$$ density. Thus, to generate a value from such a distribution, undo all these transformations in reverse order:
Generate a value $$Y$$ from a Chi-squared$$(1+\alpha)$$ distribution, raise it to the $$(1+\alpha)/2$$ power, randomly negate it (with probability $$1/2$$), multiply by $$\sigma,$$ and add $$\mu.$$
This R code exhibits one such implementation. n is the number of independent values to draw. rf <- function(n, mu, sigma, alpha) {
y <- rchisq(n, 1 + alpha) # A chi-squared variate
u <- sample(c(-1,1), n, replace = TRUE) # Random sign change
y^((1 + alpha)/2) * u * sigma + mu
}
Here are some examples of values generated in this fashion (100,000 of each) along with graphs of $$f.$$
Generating Chi-squared variates with parameter $$1+\alpha$$ near zero is problematic.
|
You can see this code works for $$1+\alpha = 0.1$$ (bottom left), but watch out when it gets much smaller than this:
The spike and gap in the middle should not be there.
|
https://math.stackexchange.com/questions/1184338/gibbs-phenomenon-and-fourier-series
| 1,571,861,776,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-43/segments/1570987836295.98/warc/CC-MAIN-20191023201520-20191023225020-00216.warc.gz
| 581,563,792
| 32,428
|
# Gibbs Phenomenon and Fourier Series
a) Show the partial sum $$S(x) = \frac{4}{\pi} \sum_{n=1}^N \frac{\sin((2n-1)x)}{2n-1},$$ which may also be written as $$\frac{2}{\pi}\int_0^x\frac{\sin(2Nt)}{\sin(t)}dt,$$ has extrema at $x= \frac{m\pi}{2N}$
where $m$ is any positive integer except m=2kN, k also integer.
Solution: The derivative is $$S'(x) = \frac{2}{\pi}\frac{\sin(2Nx)}{\sin(x)},$$ which equals zero $$\text{where }\sin(2Nx)=0$$ and $\sin(x)$ does not equal zero.
$\sin(x) = 0$ where $x$ is a multiple of $\pi$.
Therefore, $$\sin(2Nx)=0$$ where $x=\frac{m\pi}{2N}$
however $\sin(x)$ cannot equal zero, i.e. $$\sin(\tfrac{m\pi}{2N})\neq 0,$$ so $m$ is any positive integer except $m=2kN$, with $k$ also an integer. Is this complete?!
b) Consider the first extrema to the right of the discontinuity, located at $x=\frac{\pi}{2N}$. By considering a suitable small angle formula show that the value of the sum at this point $$S(\frac{\pi}{2N})≈\frac{2}{\pi}\int_0^{\pi} \frac{\sin(u)}{u}du$$
Solution: I'm not sure which small-angle formula I'm meant to consider, or how to apply it. The Taylor series of $\sin$? I see that $$S(\tfrac{\pi}{2N}) = \frac{4}{\pi} \left(\sin(\tfrac{\pi}{2N})+\frac{\sin(\tfrac{3\pi}{2N})}{3}+\frac{\sin(\tfrac{5\pi}{2N})}{5}+\ldots\right)$$ $$=\frac{2}{\pi}\cdot\frac{\pi}{N}\left(\frac{\sin(\frac{\pi}{2N})}{\frac{\pi}{2N}}+\frac{\sin(\frac{3\pi}{2N})}{\frac{3\pi}{2N}}+\frac{\sin(\frac{5\pi}{2N})}{\frac{5\pi}{2N}}+\cdots\right)$$
c) and by getting a computer to evaluate this numerically show that $$S(\frac{\pi}{2N})≈1.1790$$ independently of $N$.
Not really sure how I could show this?
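One possible way to do the numerical check in part (c), sketched in R (the helper sinc and the particular values of N are my own choices):
sinc <- function(u) ifelse(u == 0, 1, sin(u) / u)
2 / pi * integrate(sinc, 0, pi)$value                      # ~ 1.17898
S <- function(x, N) 4 / pi * sum(sin((2 * (1:N) - 1) * x) / (2 * (1:N) - 1))
sapply(c(10, 100, 1000), function(N) S(pi / (2 * N), N))   # all close to 1.1790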
Hence comment on the accuracy of Fourier series at discontinuities (also known as Gibbs phenomenon). Given that the error at $\frac{π}{2N}$ is nearly constant explain why the Fourier Convergence theorem is, or is not, valid for this problem?
Where a function has a jump discontinuity, the Fourier series will overshoot as it approaches the discontinuity. As the number of terms in the Fourier series increases, the amount of overshoot converges to a constant percentage (around 17.9%) of the size of the jump.
could someone explain this to me please? -sorry for the long winded question!
In this answer, it is shown that the overshoot, on each side, is approximately $$\frac1\pi\int_0^\pi\frac{\sin(t)}{t}\mathrm{d}t-\frac12=0.089489872236$$ of the total jump. Thus, the overshoot you mention is twice that.
| 799
| 2,373
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.0625
| 4
|
CC-MAIN-2019-43
|
latest
|
en
| 0.841029
|
# Gibbs Phenomenon and Fourier Series
a) Show the partial sum $$S = \frac{4}{\pi} \sum_{n=1}^N \frac{\sin((2n-1)t)}{2n-1}$$ which may also be written as $$\frac{2}{\pi}\int_0^x\frac{\sin(2Nt)}{\sin(t)}dt$$ has extrema at $x= \frac{m\pi}{2N}$
where $m$ is any positive integer except m=2kN, k also integer. Solution: The derivative of $$S = \frac{2}{\pi}\frac{\sin(2Nx)}{\sin(x)} = 0$$ $$\text{where }\sin(2Nx)=0,$$ $\sin(x)$ cannot equal zero. $\sin(x) = 0$ where $x$ is a multiple of $\pi$. Therefore, $$\sin(2Nx)=0$$ where $x=\frac{m\pi}{2N}$
however $\sin(x)$ cannot equal zero, $$\sin(\frac{m\pi}{2N})\neq 0$$ som is any positive integer except $m=2kN$, $k$ also integer. Is this complete?! b) Consider the first extrema to the right of the discontinuity, located at $x=\frac{\pi}{2N}$. By considering a suitable small angle formula show that the value of the sum at this point $$S(\frac{\pi}{2N})≈\frac{2}{\pi}\int_0^{\pi} \frac{\sin(u)}{u}du$$
Solution: I'm not sure which small angle formula i'm meant to considering?! Taylor series of sin? or how to consider it? I see that $$S(\frac{\pi}{2N}) = \frac{4}{\pi} (\sin(\frac{\pi}{2N})+\frac{\sin(\frac{3\pi}{2N})}{3}+\frac{\frac{\sin(5\pi)}{2N}}{5}+\ldots)$$ $$=\frac{2}{\pi}(\frac{\pi}{N}(\frac{\sin(\frac{\pi}{2N})}{\frac{\pi}{2N}}+\frac{\sin(\frac{3\pi}{2N})}{\frac{3\pi}{2N}}+\frac{\sin(\frac{5\pi}{2N})}{\frac{5\pi}{2N}}+\cdots)$$
c) and by getting a computer to evaluate this numerically show that $$S(\frac{\pi}{2N})≈1.1790$$ independently of $N$. Not really sure how I could show this? Hence comment on the accuracy of Fourier series at discontinuities (also known as Gibbs phenomenon). Given that the error at $\frac{π}{2N}$ is nearly constant explain why the Fourier Convergence theorem is, or is not, valid for this problem? Where a function has a jump discontinuity, the fourier series will overshoot as it approaches the discontinuity. As the number of terms in the fouler series increases, the amount of overshoot will converge to a constant percentage (around 17.9) of the amount of the jump
could someone explain this to me please? -sorry for the long winded question!
|
In this answer, it is shown that the overshoot, on each side, is approximately $$\frac1\pi\int_0^\pi\frac{\sin(t)}{t}\mathrm{d}t-\frac12=0.089489872236$$ of the total jump.
|
https://electronics.stackexchange.com/questions/423463/will-linear-voltage-regulator-step-up-current/423464
| 1,566,195,672,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-35/segments/1566027314667.60/warc/CC-MAIN-20190819052133-20190819074133-00021.warc.gz
| 441,024,035
| 31,916
|
# Will linear voltage regulator step up current?
I have a regulated 9 volt 300 mA power supply. I want to step it down to 5 volts using the linear voltage regulator LM7805, and I want to know how much current I can draw at 5 volts: will it be 300 mA, or will it be close to 540 mA, since power = voltage * current?
• With a 9V*0.3A=2.7W supply you can only achieve >90% efficiency with an SMPS to store energy and transfer with rapid switching. – Sunnyskyguy EE75 Feb 20 at 20:32
No. A linear regulator works by burning off excess voltage as heat, therefore current in equals current out. The linear regulator is essentially throwing away the excess energy in order to regulate, rather than converting it to the output. You need a switching regulator if you want to take advantage of power in equals power out in order to convert a high input voltage, low input current into a lower output voltage, higher output current.
$$P_{in} = P_{out}$$
but for a linear regulator it looks like this:
$$V_{in} \times I_{in} = (V_{out} \times I_{out}) + [(V_{in} - V_{out}) \times I_{out}]$$
The last term in square brackets is the excess voltage being converted to heat. If we expand and simplify the right hand side, a bunch of things cancel out and we get:
$$V_{in} \times I_{in} = V_{in} \times I_{out}$$
Therefore:
$$I_{in} = I_{out}$$
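For the concrete numbers in the question the bookkeeping looks like this (a trivial sketch; the regulator's own quiescent current is ignored here):
V_in <- 9; V_out <- 5; I_out <- 0.3      # 9 V, 300 mA supply feeding the 7805
P_in   <- V_in  * I_out                  # 2.7 W drawn from the supply
P_out  <- V_out * I_out                  # 1.5 W delivered to the load
P_heat <- (V_in - V_out) * I_out         # 1.2 W burned off in the regulator
c(P_in = P_in, P_out = P_out, P_heat = P_heat)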
No, it won't step up current. You can think of a regulator as a resistor that adjusts its resistance to keep the voltage stable.
However, you can buy DC to DC converters that 'boost' the current. But DC to DC converters are usually called by what they do to the voltage, not the current.
A boost converter 'boosts' or steps up the voltage from a lower voltage to a higher one (at the expense of current and a small loss in power)
A buck converter or step down converter takes a higher voltage into a lower one (with potentially more current than is on the input of the converter, also with a small loss)
They actually make 78XX series DC to DC converters that are drop in compatible with linear regulators that buck or boost voltage.
• "drop in compatible" - whee, thanks! that's an improvement a hobbyist like can easily overlook – quetzalcoatl Feb 21 at 10:16
since power = voltage * current.
It is true when applied to both sides of devices that transform electricity (as AC transformers, or more sophisticated devices known as "DC-DC converters"). These devices do transform voltages/currents, so if the output voltage is lower, the output current might be higher. Keep in mind that these devices do the transformation with certain efficiency (80-90%), so the "output power" = "input power" x 0.8 practically.
In the case of linear regulators it is not true, the regulators don't "transform", they just regulate output by dissipating the excess of voltage (drop-out voltage) in its regulating elements (transistors). Therefore whatever current comes in, the same current goes out, and even a bit less, since the regulation takes some toll. For example, the old LM7805 IC will consume within itself about 4-5 mA for its "services", so if your input is strictly 300 mA, you might get only 295 mA out.
| 765
| 3,150
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.703125
| 4
|
CC-MAIN-2019-35
|
latest
|
en
| 0.911949
|
# Will linear voltage regulator step up current? I have a regulated 9 volt 300mA power supply I want to step it down to 5 volt using Linear Voltage Regulator LM7805 , I want to know how much current can I can draw at 5 volts, will it be 300mA or will it be close to 540mA, since power = voltage * current. • With a 9V*0.3A=2.7W supply you can only achieve >90% efficiency with an SMPS to store energy and transfer with rapid switching. – Sunnyskyguy EE75 Feb 20 at 20:32
No. A linear regulator works by burning off excess voltage as heat, therefore current in equals current out. The linear regulator is essentially throwing away the excess energy in order to regulate, rather than converting it to the output. You need a switching regulator if you want to take advantage of power in equals power out in order to convert a high input voltage, low input current into a lower output voltage, higher output current. $$\P_{in} = P_{out}\$$
but for a linear regulator it looks like this:
$$\V_{in} \times I_{in} = (V_{out} \times I_{out}) + [(V_{in} - V_{out}) \times I_{out}]\$$
The last term in square brackets is the excess voltage being converted to heat.
|
If we expand and simplify the right hand side, a bunch of things cancel out and we get:
$$\V_{in} \times I_{in} = V_{in} \times I_{out}\$$
Therefore:
$$\I_{in} = I_{out}\$$
No, it won't step up current.
|
https://matheducators.stackexchange.com/questions/18576/ramanujan-results-for-middle-school/18599
| 1,702,123,804,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00269.warc.gz
| 421,662,281
| 40,822
|
# Ramanujan results for middle school?
Which of Ramanujan's results could be explained to a middle-school-level audience, i.e. without using integrals and other material from the university curriculum?
For example Ramanujan's infinite radicals could be explained easily
$$3=\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}$$
• Why Ramanujan specifically? Jul 17, 2020 at 22:52
• just his results have many infinite forms, which sound fun! @ChrisCunningham Jul 18, 2020 at 12:54
## 1 Answer
You can try the Rogers–Ramanujan identities:
• The number of partitions of $$n$$ in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 1,4,6,9.
• The number of partitions of $$n$$ without 1 in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 2,3,7,8.
For example, taking $$n=10$$:
• Partitions in which adjacent parts are at least 2 apart: $$10 = 10 = 9 + 1 = 8 + 2 = 7 + 3 = 6 + 4 = 6 + 3 + 1$$
• Partitions in which each part ends with 1,4,6,9: $$10 = 9 + 1 = 6 + 4 = 6 + 1 + 1 + 1 + 1 = 4 + 4 + 1 + 1 = 4 + 6\times 1 = 10 \times 1$$
• Partitions without 1 in which adjacent parts are at least 2 apart: $$10 = 10 = 8 + 2 = 7 + 3 = 6 + 4$$
• Partitions in which each part ends with 2,3,7,8: $$10 = 8 + 2 = 7 + 3 = 3 + 3 + 2 + 2 = 2 + 2 + 2 + 2 + 2$$
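A small R sketch, my own addition rather than part of the answer, that counts both kinds of partitions so the identities can be checked for small n:
count_parts <- function(n, parts) {      # partitions of n into the given parts
  ways <- c(1, rep(0, n))                # ways[k + 1] = number of partitions of k
  for (p in parts[parts <= n])
    for (k in p:n)
      ways[k + 1] <- ways[k + 1] + ways[k - p + 1]
  ways[n + 1]
}
count_gap2 <- function(n, min_part = 1) {   # parts pairwise at least 2 apart
  if (n == 0) return(1)
  if (n < min_part) return(0)
  sum(sapply(min_part:n, function(p) count_gap2(n - p, p + 2)))
}
n <- 10
ends_1469 <- (1:n)[(1:n) %% 5 %in% c(1, 4)]      # parts ending in 1, 4, 6, 9
ends_2378 <- (1:n)[(1:n) %% 5 %in% c(2, 3)]      # parts ending in 2, 3, 7, 8
c(count_gap2(n, 1), count_parts(n, ends_1469))   # both 6
c(count_gap2(n, 2), count_parts(n, ends_2378))   # both 4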
| 490
| 1,378
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.828125
| 4
|
CC-MAIN-2023-50
|
longest
|
en
| 0.936146
|
# Ramanujan results for middle school? Pls I wonder what Ramanujan's results could be explained to middle school level audience, ie without using integral etc that is up to university curriculum? For example Ramanujan's infinite radicals could be explained easily
$$3=\sqrt {1+2{\sqrt {1+3{\sqrt {1+\cdots }}}}}$$
• Why Ramanujan specifically? Jul 17, 2020 at 22:52
• just his results have many infinite forms, which sound fun! @ChrisCunningham Jul 18, 2020 at 12:54
## 1 Answer
You can try the Rogers–Ramanujan identities:
• The number of partitions of $$n$$ in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 1,4,6,9. • The number of partitions of $$n$$ without 1 in which adjacent parts are at least 2 apart is the same as the number of partitions of $$n$$ in which each part ends with 2,3,7,8.
|
For example, taking $$n=10$$:
• Partitions in which adjacent parts are at least 2 apart: $$10 = 10 = 9 + 1 = 8 + 2 = 7 + 3 = 6 + 4 = 6 + 3 + 1$$
• Partitions in which each part ends with 1,4,6,9: $$10 = 9 + 1 = 6 + 4 = 6 + 1 + 1 + 1 + 1 = 4 + 4 + 1 + 1 = 4 + 6\times 1 = 10 \times 1$$
• Partitions without 1 in which adjacent parts are at least 2 apart: $$10 = 10 = 8 + 2 = 7 + 3 = 6 + 4$$
• Partitions in which each part ends with 2,3,7,8: $$10 = 8 + 2 = 7 + 3 = 3 + 3 + 2 + 2 = 2 + 2 + 2 + 2 + 2$$
|
https://math.stackexchange.com/questions/1469820/area-under-quarter-circle-by-integration
| 1,638,088,187,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-49/segments/1637964358480.10/warc/CC-MAIN-20211128073830-20211128103830-00013.warc.gz
| 466,137,620
| 34,006
|
# Area under quarter circle by integration
How would one go about finding out the area under a quarter circle by integrating. The quarter circle's radius is r and the whole circle's center is positioned at the origin of the coordinates. (The quarter circle is in the first quarter of the coordinate system)
From the equation $x^2+y^2=r^2$, you may express your area as the following integral $$A=\int_0^r\sqrt{r^2-x^2}\:dx.$$ Then substitute $x=r\sin \theta$, $\theta=\arcsin (x/r)$, to get \begin{align} A&=\int_0^{\pi/2}\sqrt{r^2-r^2\sin^2 \theta}\:r\cos \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{1-\sin^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{\cos^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\cos^2 \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\frac{1+\cos(2\theta)}2 \:d\theta\\ &=r^2\int_0^{\pi/2}\frac12 \:d\theta+\frac{r^2}2\underbrace{\left[ \frac12\sin(2\theta)\right]_0^{\pi/2}}_{\color{#C00000}{=\:0}}\\ &=\frac{\pi}4r^2. \end{align}
• Yes, we have, for $0<x<r$, $\frac{d\theta}{dx}=\frac{1}{\sqrt{r^2-x^2}}>0$, $0=\arcsin (0/r) \leq \theta (r)\leq \arcsin (r/r)=\pi/2$. Thanks! Oct 8 '15 at 18:45
Here is a quicker solution. The area can be seen as a collection of very thin triangles, one of which is shown below. As $d\theta\to0$, the base of the triangle becomes $rd\theta$ and the height becomes $r$, so the area is $\frac12r^2d\theta$. The limits of $\theta$ are $0$ and $\frac\pi2$. $$\int_0^\frac\pi2\frac12r^2d\theta=\frac12r^2\theta|_0^\frac\pi2=\frac14\pi r^2$$
let circle: $x^2+y^2=r^2$ then consider a slab of area $dA=ydx$ then the area of quarter circle $$A_{1/4}=\int_0^r ydx=\int_0^r \sqrt{r^2-x^2}dx$$ $$=\frac12\left[x\sqrt{r^2-x^2}+r^2\sin^{-1}\left(x/r\right)\right]_0^r$$
$$=\frac12\left[0+r^2(\pi/2)\right]=\frac{\pi}{4}r^2$$ or use double integration: $$=\iint rdr d\theta= \int_0^{\pi/2}\ d\theta\int_0^R rdr=\int_0^{\pi/2}\ d\theta(R^2/2)=(R^2/2)(\pi/2)=\frac{\pi}{4}R^2$$
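A quick numerical sanity check of these computations in R (the radius r = 2 is an arbitrary choice):
r <- 2
integrate(function(x) sqrt(r^2 - x^2), 0, r)$value   # ~ 3.1416
pi * r^2 / 4                                         # the same value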
| 801
| 1,934
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.375
| 4
|
CC-MAIN-2021-49
|
latest
|
en
| 0.641209
|
# Area under quarter circle by integration
How would one go about finding out the area under a quarter circle by integrating. The quarter circle's radius is r and the whole circle's center is positioned at the origin of the coordinates. (The quarter circle is in the first quarter of the coordinate system)
From the equation $x^2+y^2=r^2$, you may express your area as the following integral $$A=\int_0^r\sqrt{r^2-x^2}\:dx.$$ Then substitute $x=r\sin \theta$, $\theta=\arcsin (x/r)$, to get \begin{align} A&=\int_0^{\pi/2}\sqrt{r^2-r^2\sin^2 \theta}\:r\cos \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{1-\sin^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\sqrt{\cos^2 \theta}\:\cos\theta \:d\theta\\ &=r^2\int_0^{\pi/2}\cos^2 \theta \:d\theta\\ &=r^2\int_0^{\pi/2}\frac{1+\cos(2\theta)}2 \:d\theta\\ &=r^2\int_0^{\pi/2}\frac12 \:d\theta+\frac{r^2}2\underbrace{\left[ \frac12\sin(2\theta)\right]_0^{\pi/2}}_{\color{#C00000}{=\:0}}\\ &=\frac{\pi}4r^2. \end{align}
• Yes, we have, for $0<x<r$, $\frac{d\theta}{dx}=\frac{1}{\sqrt{r^2-x^2}}>0$, $0=\arcsin (0/r) \leq \theta (r)\leq \arcsin (r/r)=\pi/2$. Thanks! Oct 8 '15 at 18:45
Here is a quicker solution. The area can be seen as a collection of very thin triangles, one of which is shown below. As $d\theta\to0$, the base of the triangle becomes $rd\theta$ and the height becomes $r$, so the area is $\frac12r^2d\theta$. The limits of $\theta$ are $0$ and $\frac\pi2$.
|
$$\int_0^\frac\pi2\frac12r^2d\theta=\frac12r^2\theta|_0^\frac\pi2=\frac14\pi r^2$$
let circle: $x^2+y^2=r^2$ then consider a slab of area $dA=ydx$ then the area of quarter circle $$A_{1/4}=\int_0^r ydx=\int_0^r \sqrt{r^2-x^2}dx$$ $$=\frac12\left[x\sqrt{r^2-x^2}+r^2\sin^{-1}\left(x/r\right)\right]_0^r$$
$$=\frac12\left[0+r^2(\pi/2)\right]=\frac{\pi}{4}r^2$$ or use double integration: $$=\iint rdr d\theta= \int_0^{\pi/2}\ d\theta\int_0^R rdr=\int_0^{\pi/2}\ d\theta(R^2/2)=(R^2/2)(\pi/2)=\frac{\pi}{4}R^2$$
|
https://stats.stackexchange.com/questions/475242/how-can-i-estimate-the-probability-of-a-random-variable-from-one-population-bein
| 1,718,620,562,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198861701.67/warc/CC-MAIN-20240617091230-20240617121230-00479.warc.gz
| 503,642,805
| 41,098
|
# How can I estimate the probability of a random variable from one population being greater than all other random variables from unique populations?
Lets assume I have samples from 5 unique populations. Let's also assume I have a mean and standard deviation from each of these populations, they are normally distributed and completely independent of one another.
How can I estimate the probability that a sample of one of the populations will be greater than a sample from each of the other 4 populations?
For a example, if I have 5 types of fish (the populations) in my pond, such as bass, catfish, karp, perch and bluegill, and i'm measuring the lengths (the variables) of the fish, how do can I estimate the probability that the length of a bass I catch will be greater than the length of all the other types of fish? I think I understand how to compare 2 individual populations but can't seem to figure out how to estimate probability relative to all populations. As opposed to the probability of the bass to a catfish, and then a bass to a karp, etc., I'd like to know if its possible to reasonably estimate the probability of the length of the bass being greater that the lengths of all other populations.
Any help would be greatly appreciated! Thanks!
Edit: I believe my original solution is incorrect. I treated the events [koi > catfish] and [koi > karp] as independent when they are certainly not.
\begin{aligned} P(Y>\max\{X_1,...,X_n\})&=P(Y>X_1,...,Y>X_n)\\ &=\int_{-\infty}^{\infty} P(Y>X_1,...,Y>X_n|Y=y) f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ P(Y>X_i|Y=y) \right]f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ \Phi \left( \tfrac{y-\mu_{X_i}}{\sigma_{X_i}} \right) \right]f_Y(y)dy \end{aligned}
I do hope that someone can provide a better solution, as the above expression seems mismatched with the relative simplicity of the question.
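The corrected integral can be evaluated numerically. Here is an R sketch with made-up means and standard deviations for the bass and the four other populations; all numbers are purely illustrative.
p_longest <- function(mu_y, sd_y, mu_x, sd_x) {
  # P(Y > max X_i) = integral of prod_i Phi((y - mu_i)/sigma_i) * f_Y(y) dy
  integrand <- function(y)
    sapply(y, function(yy) prod(pnorm((yy - mu_x) / sd_x)) * dnorm(yy, mu_y, sd_y))
  integrate(integrand, -Inf, Inf)$value
}
mu_x <- c(45, 35, 25, 20); sd_x <- c(10, 8, 5, 4)    # hypothetical non-bass parameters (cm)
p_longest(mu_y = 50, sd_y = 12, mu_x = mu_x, sd_x = sd_x)
y <- rnorm(1e5, 50, 12)                              # simulation cross-check
x <- sapply(seq_along(mu_x), function(i) rnorm(1e5, mu_x[i], sd_x[i]))
mean(y > apply(x, 1, max))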
Let $$Y$$ represent the length of a fish from the population of interest, such as bass, and $$X_i$$ represent the length of fish from another population $$i$$, such as karp or catfish. You want to calculate the probability that the bass is longer than the longest non-bass fish. That is equivalent to the probability that the bass is longer than the carp, and the bass is longer than the catfish, and the bass is longer than the perch, etc. $$P(Y>\max\{X_1,...,X_n\})=P(Y>X_1,...,Y>X_n)$$
Because the lengths of your fish are independently distributed, the probability of all of these events happening is the product of the individual probabilities.
$$P(Y>X_1,...,Y>X_n) =\prod_{i=1}^{n} P(Y>X_i)$$
So the probability that bass is longer than all of your other fish is found by multiplying the probabilities that the bass is larger than each other type of fish.
That leaves only the problem of calculating the probability that a fish from one normal distribution is longer than a fish from another normal distribution. That is, $$P(Y>X_i)$$. To calculate this probability we rewrite it (ignoring the subscript) in the form $$P(Y>X)=P(Y-X>0)$$
Thankfully, the distribution of $$Y-X$$ is simple in the case where $$X$$ and $$Y$$ are normally distributed. That is, $$X \sim N(\mu_{X},\sigma_{X})$$ and $$Y \sim N(\mu_{Y},\sigma_{Y})$$. We can use the following facts:
• Any linear combination of independent normal random variables (ie. $$aX+bY$$) is itself a normal random variable.
• $$\mathbb{V}(aX+bY)=a^2\mathbb{V}(X)+b^2\mathbb{V}(Y)$$ for any uncorrelated random variables $$X$$ and $$Y$$.
• $$\mathbb{E}(aX+bY) = a\mathbb{E}(X)+b\mathbb{E}(Y)$$ for any random variables $$X$$ and $$Y$$.
In this problem, the difference in the lengths of the two fish $$D=Y-X=(1)X+(-1)Y$$ is a linear combination of the two lengths, $$X$$ and $$Y$$. Therefore, using the facts above, we find that the distribution of the difference in lengths is
$$D\sim N(\mu_Y-\mu_X,\sigma^2_X+\sigma^2_Y)$$
The probability that this difference is greater than zero is
$$P(D>0)=1-P(D<0)=1-F_D(0)=1-\Phi \left(\frac{0-\mu_D}{\sigma_D} \right)$$
In terms of $$X$$ and $$Y$$ this is
$$P(Y-X>0)=1-\Phi \left(\frac{\mu_X-\mu_Y}{\sqrt{\sigma^2_X+\sigma^2_Y}}\right)$$
The final solution, in all its glory, would then be:
$$P(Y>\max\{X_1,...,X_n\})=\prod_{i=1}^{n} 1-\Phi \left(\frac{\mu_{X_i}-\mu_Y}{\sqrt{\sigma^2_{X_i}+\sigma^2_Y}}\right)$$
• Presumably your operator "$\cap$" means ordinary multiplication of numbers, because both its arguments (being probabilities) are numbers. Maybe there's a typo there? "This extends to" hides the content of the answer--it needs elaboration. The meaning of "alternatively" is not evident and so needs elaboration, too.
– whuber
Commented Jul 2, 2020 at 20:05
• Thanks @whuber. Hopefully, the edited answer is clearer. Commented Jul 2, 2020 at 20:45
• It is, thank you (+1). I can't help thinking, though, that the OP might welcome some words about how the individual probabilities $P(Y\gt X_i)$ might be estimated or calculated.
– whuber
Commented Jul 2, 2020 at 20:59
• One thing i'm struggling to understand, is after I find the product that the bass is larger than the karp, the catfish, etc., I do the same for each fish (the karp being larger than all others, the catfish being larger than all others, etc.). Wouldn't the sum of the probabilities of each fish being larger than all others be equal to 1? i'm not getting anywhere close to that, maybe i'm not understanding why it wouldn't equal 1? Surely one of the fish will be larger than all others? I can provide numbers and show what i'm coming up with if that helps. Commented Jul 9, 2020 at 15:43
• @mc_chief Thank you for that excellent observation. My answer is very likely mistaken. I believe I treat the case where [koi > catfish] and [coy > karp] are independent events. In reality, they are not. I'll correct this in a new answer ASAP. Commented Jul 9, 2020 at 17:31
| 1,641
| 5,845
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.03125
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.928478
|
# How can I estimate the probability of a random variable from one population being greater than all other random variables from unique populations? Lets assume I have samples from 5 unique populations. Let's also assume I have a mean and standard deviation from each of these populations, they are normally distributed and completely independent of one another. How can I estimate the probability that a sample of one of the populations will be greater than a sample from each of the other 4 populations? For a example, if I have 5 types of fish (the populations) in my pond, such as bass, catfish, karp, perch and bluegill, and i'm measuring the lengths (the variables) of the fish, how do can I estimate the probability that the length of a bass I catch will be greater than the length of all the other types of fish? I think I understand how to compare 2 individual populations but can't seem to figure out how to estimate probability relative to all populations. As opposed to the probability of the bass to a catfish, and then a bass to a karp, etc., I'd like to know if its possible to reasonably estimate the probability of the length of the bass being greater that the lengths of all other populations. Any help would be greatly appreciated! Thanks! Edit: I believe my original solution is incorrect. I treated the events [koi > catfish] and [coy > karp] as independent when they are certainly not. \begin{aligned} P(Y>\max\{X_1,...,X_n\})&=P(Y>X_1,...,Y>X_n)\\ &=\int_{-\infty}^{\infty} P(Y>X_1,...,Y>X_n|Y=y) f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ P(Y>X_i|Y=y) \right]f_Y(y)dy\\ &=\int_{-\infty}^{\infty} \prod_{i=1}^n \left[ \Phi \left( \tfrac{y-\bar{x}_n}{\sigma_{x_n}} \right) \right]f_Y(y)dy \end{aligned}
I do hope that someone can provide a better solution, as the above expression seems mismatched with the relative simplicity of the question. Let $$Y$$ represent the length of a fish from the population of interest, such as bass, and $$X_i$$ represent the length of fish from another population $$i$$, such as karp or catfish. You want to calculate the probability that the bass is longer than the longest non-bass fish. That is equivalent to the probability that the bass is longer than the carp, and the bass is longer than the catfish, and the bass is longer than the perch, etc. $$P(Y>\max\{X_1,...,X_n\})=P(Y>X_1,...,Y>X_n)$$
Because the lengths of your fish are independently distributed, the probability of all of these events happening is the product of the individual probabilities. $$P(Y>X_1,...,Y>X_n) =\prod_{i=1}^{n} P(Y>X_i)$$
So the probability that bass is longer than all of your other fish is found by multiplying the probabilities that the bass is larger than each other type of fish. That leaves only the problem of calculating the probability that a fish from one normal distribution is longer than a fish from another normal distribution. That is, $$P(Y>X_i)$$. To calculate this probability we rewrite it (ignoring the subscript) in the form $$P(Y>X)=P(Y-X>0)$$
Thankfully, the distribution of $$Y-X$$ is simple in the case where $$X$$ and $$Y$$ are normally distributed. That is, $$X \sim N(\mu_{X},\sigma_{X})$$ and $$Y \sim N(\mu_{Y},\sigma_{Y})$$. We can use the following facts:
• Any linear combination of independent normal random variables (ie. $$aX+bY$$) is itself a normal random variable. • $$\mathbb{V}(aX+bY)=a^2\mathbb{V}(X)+b^2\mathbb{V}(Y)$$ for any uncorrelated random variables $$X$$ and $$Y$$. • $$\mathbb{E}(aX+bY) = a\mathbb{E}(X)+b\mathbb{E}(Y)$$ for any random variables $$X$$ and $$Y$$. In this problem, the difference in the lengths of the two fish $$D=Y-X=(1)X+(-1)Y$$ is a linear combination of the two lengths, $$X$$ and $$Y$$.
|
Therefore, using the facts above, we find that the distribution of the difference in lengths is
$$D\sim N(\mu_Y-\mu_X,\sigma^2_X+\sigma^2_Y)$$
The probability that this difference is greater than zero is
$$P(D>0)=1-P(D<0)=1-F_D(0)=1-\Phi \left(\frac{0-\mu_D}{\sigma_D} \right)$$
In terms of $$X$$ and $$Y$$ this is
$$P(Y-X>0)=1-\Phi \left(\frac{\mu_X-\mu_Y}{\sqrt{\sigma^2_X+\sigma^2_Y}}\right)$$
The final solution, in all its glory, would then be:
$$P(Y>\max\{X_1,...,X_n\})=\prod_{i=1}^{n} 1-\Phi \left(\frac{\mu_{X_i}-\mu_Y}{\sqrt{\sigma^2_{X_i}+\sigma^2_Y}}\right)$$
• Presumably your operator "$\cap$" means ordinary multiplication of numbers, because both its arguments (being probabilities) are numbers.
|
https://stats.stackexchange.com/questions/372895/how-parameters-formulated-for-simple-regression-model
| 1,717,074,164,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-22/segments/1715971667627.93/warc/CC-MAIN-20240530114606-20240530144606-00683.warc.gz
| 473,174,625
| 39,760
|
# How parameters formulated for Simple Regression Model
I am reading Simple Regression Model from this book, Section 6.5 (page 267 in downloaded pdf, 276 if viewed online).
The author starts with below equation for a simple linear regression model,
$$Y_i = \alpha_1 + \beta x_i + \varepsilon_i$$
And then, a few lines later, he lets, for convenience, $$\alpha_1 = \alpha - \beta\overline{x}$$ so that,
$$Y_i = \alpha + \beta(x_i - \overline{x}) + \varepsilon_i$$
where $$\overline{x} = \dfrac{1}{n}\sum\limits_{i=1}^nx_i$$
My questions:
1. It is not convincing to bring in $$\overline{x}$$ just for convenience's sake. Can anyone please explain the logic behind bringing it into the equation?
2. After above equation, the author says, $$Y_i$$ is equal to a nonrandom quantity, $$\alpha + \beta(x_i - \overline{x})$$, plus a mean zero normal random variable $$\varepsilon_i$$. Does that mean, $$\alpha + \beta(x_i - \overline{x})$$ has no randomness involved in that?
Kindly help.
1. The $$\alpha_1$$ in the first equation and the $$\alpha$$ in the second equation are different quantities. Let $$\alpha_2$$ be the $$\alpha$$ in the second equation; then $$\alpha_2 = \alpha_1 + \beta \bar x$$ (equivalently, $$\alpha_1 = \alpha_2 - \beta \bar x$$).
In the days before computers were widely available, the line was fit using calculators. Bringing in $$\bar x$$ really simplified the computation.
1. From the first equation, $$\epsilon$$ is the only random component. So the source of randomness of $$Y$$ is $$\epsilon$$; the other part, $$\alpha + \beta(x - \bar x)$$, is a (known or unknown) constant.
• I just corrected $\alpha_1$ to $\alpha$ in 2nd equation. Still the reason is not convincing that it simplified the computation. Can you kindly elaborate further? How could $\overline{x}$ suddenly enter the equation without an associated mathematical logic. Oct 20, 2018 at 18:32
• Let $z_i=x_i-\bar x$, then (1) $\sum z_i = 0$ vs calculating $\sum x_i$, (2) $\sum z_i^2$ is easier than $\sum x_i^2$, and (3) $\sum z_iY_i$ is easier than $\sum x_iY_i$. Introducing $\bar x$ into the equation does not change anything, similar to $+ a - a$, which we often use to prove something in math. Oct 20, 2018 at 18:43
• $Y_i = \alpha_1 + \beta x_i + \varepsilon_i$ ==> $Y_i = \alpha_1 + \beta x_i + \varepsilon_i - \beta \bar x + \beta \bar x$ ==> $Y_i = (\alpha_1 +\beta \bar x) + \beta (x_i - \bar x) + \varepsilon_i$ ==> $Y_i = \alpha + \beta (x_i - \bar x) + \varepsilon_i$ Oct 20, 2018 at 18:56
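A short R illustration of this equivalence on simulated data (my own sketch, not from the answer):
set.seed(1)
x <- runif(50, 0, 10)
Y <- 2 + 0.5 * x + rnorm(50)
fit_raw      <- lm(Y ~ x)                  # Y = alpha_1 + beta x + eps
fit_centered <- lm(Y ~ I(x - mean(x)))     # Y = alpha + beta (x - xbar) + eps
coef(fit_raw); coef(fit_centered)          # same slope, different intercepts
coef(fit_raw)[1] + coef(fit_raw)[2] * mean(x)   # reproduces the centered intercept alpha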
| 731
| 2,417
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.71875
| 4
|
CC-MAIN-2024-22
|
latest
|
en
| 0.873219
|
# How parameters formulated for Simple Regression Model
I am reading Simple Regression Model from this book, Section 6.5 (page 267 in downloaded pdf, 276 if viewed online). The author starts with below equation for a simple linear regression model,
$$Y_i = \alpha_1 + \beta x_i + \varepsilon_i$$
And then after few lines, he lets for conveience that, $$\alpha_1 = \alpha - \beta\overline{x}$$ so that,
$$Y_i = \alpha + \beta(x_i - \overline{x}) + \varepsilon_i$$
where $$\overline{x} = \dfrac{1}{n}\sum\limits_{i=1}^nx_i$$
My questions:
1. It is not convincing to bring in $$\overline{x}$$ just for convenience sake in the equation. Can any one please explain the logic behind bring that in the equation? 2. After above equation, the author says, $$Y_i$$ is equal to a nonrandom quantity, $$\alpha + \beta(x_i - \overline{x})$$, plus a mean zero normal random variable $$\varepsilon_i$$. Does that mean, $$\alpha + \beta(x_i - \overline{x})$$ has no randomness involved in that? Kindly help. 1. $$\alpha_1$$s in two equations are different. Let $$\alpha_2$$ be the $$\alpha$$ in the second equation, then $$\alpha_1 = \alpha_2 + \beta \bar x$$
At the time that the computer was not popular or had no computer, the line was fit by using calculators. Bringing in $$\bar x$$ is really simplified the computation. 1. From the first equation, $$\epsilon$$ is the only random component. So source of randomness of $$Y$$ is $$\epsilon$$, the other parts $$\alpha + \beta x$$ are known or unknown constant. • I just corrected $\alpha_1$ to $\alpha$ in 2nd equation. Still the reason is not convincing that it simplified the computation. Can you kindly elaborate further? How could $\overline{x}$ suddenly enter the equation without an associated mathematical logic. Oct 20, 2018 at 18:32
• Let $z_i=x_i-\bar x$; then (1) $\sum z_i = 0$, versus having to calculate $\sum x_i$, (2) $\sum z_i^2$ is easier to compute than $\sum x_i^2$, and (3) $\sum z_iY_i$ is easier to compute than $\sum x_iY_i$. Introducing $\bar x$ into the equation does not change anything, similar to adding $+ a - a$, a device we often use in mathematical proofs.
|
Oct 20, 2018 at 18:43
• $Y_i = \alpha_1 + \beta x_i + \varepsilon_i$ ==> $Y_i = \alpha_1 + \beta x_i + \varepsilon_i - \beta \bar x + \beta \bar x$ ==> $Y_i = (\alpha_1 +\beta \bar x) + \beta (x_i - \bar x) + \varepsilon_i$ ==> $Y_i = \alpha + \beta (x_i - \bar x) + \varepsilon_i$ Oct 20, 2018 at 18:56
|
https://math.stackexchange.com/questions/487102/why-is-the-map-gl-nk-times-gl-nk-to-gl-nk-regular
| 1,627,633,953,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-31/segments/1627046153934.85/warc/CC-MAIN-20210730060435-20210730090435-00078.warc.gz
| 401,291,441
| 38,180
|
# Why is the map: $GL_n(K)\times GL_n(K) \to GL_n(K)$ regular?
Let $K$ be a field and $GL_n(K)$ the set of all invertible $n$ by $n$ matrices over $K$. Let $m: GL_n(K)\times GL_n(K) \to GL_n(K)$ be the usual multiplication of matrices. Why is the map $m$ regular? Thank you very much.
• The maps are just polynomials in the entries? In particular, if you identify $\text{GL}_n(K)$ as the set of pairs $(A,B)$ in $K^{n^2}$ such that $AB=1$ (where you use regular matrix multiplication), then this is obvious. – Alex Youcis Sep 8 '13 at 5:02
• @AlexYoucis, thank you very much. But what is the product of $(A, B)$ and $(C, D)$ in $K^{n^2}$? – LJR Sep 8 '13 at 5:07
• I am not saying you should. If $R$ is an algebraic ring, then you show that $R^\times$ is a variety by identifying it with the set $(x,y)$ in $R^2$ with $xy=1$. For example, $k^\times$ is an affine $k$-variety, isomorphic to the set $xy=1$ in $\mathbb{A}^2$. – Alex Youcis Sep 8 '13 at 5:12
## 1 Answer
First forget $GL_n(K)$ and work in $M_n(K)$. The multiplication map $$M_n(K)\times M_n(K)\to M_n(K)$$ is polynomial in the entries: $$((x_{ij})_{ij}, (y_{kl})_{kl})\mapsto (\sum_{r} x_{ir}y_{rl})_{il},$$ so it is a regular map. When you restrict to $GL_n(K)$, you get a regular map $$GL_n(K)\times GL_n(K)\to M_n(K).$$ As the multiplication lands in $GL_n(K)$, you get the statement you want to prove.
• Thank you very much. But the multiplication is the multiplication of matrices. Why is it a polynomial? – LJR Sep 8 '13 at 11:13
• @IJR: a matrix $(x_{ij})_{ij}$ is viewed as an element of $K^{n^2}$. – Cantlog Sep 8 '13 at 11:41
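As a concrete sanity check (a hypothetical numpy sketch, not part of the original answer): entry $(i,l)$ of a product is literally the polynomial $\sum_r x_{ir}y_{rl}$ in the entries of the two factors, which is what makes the map regular.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(n, n)).astype(float)
B = rng.integers(-3, 4, size=(n, n)).astype(float)

# Entry (i, l) of the product is the polynomial sum_r x_{ir} y_{rl}
# in the 2 n^2 coordinates of the pair (A, B).
C_poly = np.empty((n, n))
for i in range(n):
    for l in range(n):
        C_poly[i, l] = sum(A[i, r] * B[r, l] for r in range(n))

print(np.allclose(C_poly, A @ B))  # agrees with the built-in matrix product
```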
| 549
| 1,601
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.671875
| 4
|
CC-MAIN-2021-31
|
latest
|
en
| 0.777803
|
# Why is the map: $GL_n(K)\times GL_n(K) \to GL_n(K)$ regular? Let $K$ be a field and $GL_n(K)$ the set of all invertible $n$ by $n$ matrices over $K$. Let $m: GL_n(K)\times GL_n(K) \to GL_n(K)$ be the usual multiplication of matrices. Why the map $m$ is regular? Thank you very much. • The maps are just polynomials in the entries? In particular, if you identify $\text{GL}_n(K)$ as the set of pairs $(A,B)$ in $K^{n^2}$ such that $AB=1$ (where you use regular matrix multiplication), then this is obvious. – Alex Youcis Sep 8 '13 at 5:02
• @AlexYoucis, thank you very much. But what is the product of $(A, B)$ and $(C, D)$ in $K^{n^2}$? – LJR Sep 8 '13 at 5:07
• I am not saying you should. If $R$ is an algebraic ring, then you show that $R^\times$ is a variety by identifying it with the set $(x,y)$ in $R^2$ with $xy=1$.
|
For example, $k^\times$ is an affine $k$-variety, isomorphic to the set $xy=1$ in $\mathbb{A}^2$.
|
http://math.stackexchange.com/questions/tagged/linear-algebra+determinant
| 1,406,757,197,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2014-23/segments/1406510271654.40/warc/CC-MAIN-20140728011751-00388-ip-10-146-231-18.ec2.internal.warc.gz
| 177,665,836
| 23,837
|
# Tagged Questions
30 views
### The determinant of adjugate matrix
Why does $\det(\text{adj}(A)) = 0$ if $\det(A) = 0$? (without using the formula $\det(\text{adj}(A)) = \det(A)^{n-1}.)$
65 views
### Determinant of the linear map given by conjugation.
Let $S$ denote the space of skew-symmetric $n\times n$ real matrices, where every element $A\in S$ satisfies $A^T+A = 0$. Let $M$ denote an orthogonal $n\times n$ matrix, and $L_M$ denotes the ...
65 views
### Maximum determinant of a $m\times m$ - matrix with entries $1..n$
I want to find the maximal possible determinant of a $m\times m$ - matrix A with entries $1..n$. Conjecture 1 : The maximum possible determinant can be achieved by a matrix only ...
64 views
### Surprising necessary condition for a “shift-invariant” determinant
Let $A$ be a $4\times 4$ binary matrix and $Z=\pmatrix {s&s&s&s \\ s&s&s&s \\s&s&s&s \\s&s&s&s}$ Then $\det(A+Z)=\det(A)=1$ (independent of s, so ...
87 views
### Simple proof that a $3\times 3$-matrix with entries $s$ or $s+1$ cannot have determinant $\pm 1$, if $s>1$.
Let $s>1$ and $A$ be a $3\times 3$ matrix with entries $s$ or $s+1$. Then $\det(A)\ne \pm 1$. The determinant has the form $as+b$ with integers $a$,$b$ and it has to be proven that $a>0$ if ...
32 views
### Determinant of a matrix shifted by m
Let $A$ be an $n\times n$ matrix and $Z$ be the $n\times n$ matrix, whose entries are all $m$. Let $S$ be the sum of all the adjoints of $A$. Then my conjecture is $\det(A+Z)=\det(A)+Sm$ , in ...
31 views
### Relation on the determinant of a matrix and the product of its diagonal entries?
Let $A$ be a $3\times 3$ symmetric matrix, with three real eigenvalues $\lambda_1,\lambda_2,\lambda_3$, and diagonal entries $a_1,a_2,a_3$, is it true that \begin{equation*} \det ...
105 views
### Prove that if the sum of each row of A equals s, then s is an eigenvalue of A. [duplicate]
Consider an $n \times n$ matrix $A$ with the property that the row sums all equal the same number $s$. Show that $s$ is an eigenvalue of $A$. [Hint: Find an eigenvector] My attempt: By definition: ...
34 views
### How to factor and reduce a huge determinant to simpler form? Linear Algebra
So, I have learned about cofactor expansion. But the cofactor expansion I know doesn't reduce the number of rows and columns to one matrix. I usually pick a column, multiply each element in the column ...
48 views
37 views
### Determinant (or positive definiteness) of a Hankel matrix
I need to prove that the Hankel matrix given by $a_{ij}=\frac{1}{i+j}$ is positive definite. It turns out that it is a special case of the Cauchy matrices, and the determinant is given by the Cauchy ...
85 views
### Find the expansion for $\det(I+\epsilon A)$ where $\epsilon$ is small without using eigenvalue.
I'm taking a linear algebra course and the professor included the problem of proving $$\rm{det}(I+\epsilon A) = 1 + \epsilon\,\rm{tr}\,A + o(\epsilon)$$ Since the professor hasn't covered the ...
17 views
### Bound on the degree of a determinant of a polynomial matrix
I want to implement a modular algorithm for computing the determinant of a square Matrix with multivariate polynomials in $\mathbb{Z}$ as components (symbolically). My idea is first to reduce the ...
In order to get the determinant of \begin{pmatrix} \lambda-n-1 & 1 & 2 & 2 & 1 & 1 & 1& 1 & \cdots &1 & 1 \\ 1 & \lambda-2n+4 & 1 & 2 & 2 &2 ...
### Prove or disprove : $\det(A^k + B^k) \geq 0$
This question came from here. As the OP hasn't edited his question and I really want the answer, I'm adding my thoughts. Let $A, B$ be two real $n\times n$ matrices that commute and \$\det(A + ...
| 1,081
| 3,648
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0}
| 3.625
| 4
|
CC-MAIN-2014-23
|
latest
|
en
| 0.712426
|
# Tagged Questions
30 views
### The determinant of adjugate matrix
Why does $\det(\text{adj}(A)) = 0$ if $\det(A) = 0$? (without using the formula $\det(\text{adj}(A)) = \det(A)^{n-1}. )$
65 views
### Determinant of the linear map given by conjugation. Let $S$ denote the space of skew-symmetric $n\times n$ real matrices, where every element $A\in S$ satisfies $A^T+A = 0$. Let $M$ denote an orthogonal $n\times n$ matrix, and $L_M$ denotes the ...
65 views
### Maximum determinant of a $m\times m$ - matrix with entries $1..n$
I want to find the maximal possible determinant of a $m\times m$ - matrix A with entries $1..n$. Conjecture 1 : The maximum possible determinant can be achieved by a matrix only ...
64 views
### Surprising necessary condition for a “shift-invariant” determinant
Let $A$ be a $4\times 4$ binary matrix and $Z=\pmatrix {s&s&s&s \\ s&s&s&s \\s&s&s&s \\s&s&s&s}$ Then $\det(A+Z)=\det(A)=1$ (independent of s, so ...
87 views
### Simple proof that a $3\times 3$-matrix with entries $s$ or $s+1$ cannot have determinant $\pm 1$, if $s>1$. Let $s>1$ and $A$ be a $3\times 3$ matrix with entries $s$ or $s+1$. Then $\det(A)\ne \pm 1$. The determinant has the form $as+b$ with integers $a$,$b$ and it has to be proven that $a>0$ if ...
32 views
### Determinant of a matrix shifted by m
Let $A$ be an $n\times n$ matrix and $Z$ be the $n\times n$ matrix, whose entries are all $m$. Let $S$ be the sum of all the adjoints of $A$. Then my conjecture is $\det(A+Z)=\det(A)+Sm$ , in ...
31 views
### Relation on the determinant of a matrix and the product of its diagonal entries? Let $A$ be a $3\times 3$ symmetric matrix, with three real eigenvalues $\lambda_1,\lambda_2,\lambda_3$, and diagonal entries $a_1,a_2,a_3$, is it true that \begin{equation*} \det ...
105 views
### Prove that if the sum of each row of A equals s, then s is an eigenvalue of A. [duplicate]
Consider an $n \times n$ matrix $A$ with the property that the row sums all equal the same number $s$. Show that $s$ is an eigenvalue of $A$. [Hint: Find an eigenvector] My attempt: By definition: ...
34 views
### How to factor and reduce a huge determinant to simpler form? Linear Algebra
So, I have learned about cofactor expansion. But the cofactor expansion I know doesn't reduce the number of rows and columns to one matrix. I usually pick a column, multiply each element in the column ...
48 views
37 views
### Determinant (or positive definiteness) of a Hankel matrix
I need to prove that the Hankel matrix given by $a_{ij}=\frac{1}{i+j}$ is positive definite. It turns out that it is a special case of the Cauchy matrices, and the determinant is given by the Cauchy ...
85 views
### Find the expansion for $\det(I+\epsilon A)$ where $\epsilon$ is small without using eigenvalue.
|
I'm taking a linear algebra course and the professor included the problem of proving $$\rm{det}(I+\epsilon A) = 1 + \epsilon\,\rm{tr}\,A + o(\epsilon)$$ Since the professor hasn't covered the ...
17 views
### Bound on the degree of a determinant of a polynomial matrix
I want to implement a modular algorithm for computing the determinant of a square Matrix with multivariate polynomials in $\mathbb{Z}$ as components (symbolically).
|
https://math.stackexchange.com/questions/3274426/prove-sum-k-1n-frac-lefth-kp-right2kp-frac13h-np3-h
| 1,652,948,047,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00385.warc.gz
| 448,028,897
| 65,653
|
# Prove $\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13((H_n^{(p)})^3-H_n^{(3p)})+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$
Find $$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}\,,$$ where $$H_k^{(p)}=1+\frac1{2^p}+\cdots+\frac1{k^p}$$ is the $$k$$th generalized harmonic number of order $$p$$.
Cornel proved in his book, (almost) impossible integral, sums and series, the following identity :
$$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$
using series manipulations, and he also suggested that this identity can be proved using Abel's summation; I was successful in proving it that way. Other approaches are appreciated.
I am posting this problem as its importance appears when $$n$$ approaches $$\infty$$.
using Abel's summation $$\ \displaystyle\sum_{k=1}^n a_k b_k=A_nb_{n+1}+\sum_{k=1}^{n}A_k\left(b_k-b_{k+1}\right)$$ where $$\displaystyle A_n=\sum_{i=1}^n a_i$$
letting $$\ \displaystyle a_k=\frac{1}{k^p}$$ and $$\ \displaystyle b_k=\left(H_k^{(p)}\right)^2$$, we get \begin{align} S&=\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\sum_{i=1}^n\frac{\left(H_{n+1}^{(p)}\right)^2}{i^p}+\sum_{k=1}^n\left(\sum_{i=1}^k\frac1{i^p}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^n\left(H_k^{(p)}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\left(H_{k-1}^{(p)}\right)^2-\left(H_{k}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n}\left(H_{k}^{(p)}-\frac1{k^p}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)\\ &=\underbrace{\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)}_{\Large\left(H_n^{(p)}\right)^3}-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}-H_n^{(3p)}\\ &=-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}+\left(H_n^{(p)}\right)^3-H_n^{(3p)} \end{align}
which follows $$S=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$
• Keeping track of the summation indices, it seems like there is a small typo or error while going from the upper bound $n$ to $n+1$, as the lower bound remains unaffected after all. Is this intended? I cannot make sense of it right now. Jun 26, 2019 at 7:29
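For what it's worth, the identity is also easy to confirm numerically; the following is a small exact-arithmetic sketch (Python, fractions module) checking both sides for a few values of $n$ and $p$.

```python
from fractions import Fraction

def H(n, p):
    # generalized harmonic number H_n^(p)
    return sum(Fraction(1, k**p) for k in range(1, n + 1))

def lhs(n, p):
    return sum(H(k, p)**2 / Fraction(k**p) for k in range(1, n + 1))

def rhs(n, p):
    return (H(n, p)**3 - H(n, 3 * p)) / 3 + sum(H(k, p) / Fraction(k**(2 * p)) for k in range(1, n + 1))

for n in (1, 2, 5, 10):
    for p in (1, 2, 3):
        assert lhs(n, p) == rhs(n, p)   # exact equality in rational arithmetic
print("identity holds for the sampled n and p")
```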
| 1,237
| 2,690
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2022-21
|
latest
|
en
| 0.42594
|
# Prove $\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13((H_n^{(p)})^3-H_n^{(3p)})+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$
Find $$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}\,,$$ where $$H_k^{(p)}=1+\frac1{2^p}+\cdots+\frac1{k^p}$$ is the $$k$$th generalized harmonic number of order $$p$$. Cornel proved in his book, (almost) impossible integral, sums and series, the following identity :
$$\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$
using series manipulations, and he also suggested that this identity can be proved using Abel's summation; I was successful in proving it that way. Other approaches are appreciated. I am posting this problem as its importance appears when $$n$$ approaches $$\infty$$.
|
using Abel's summation $$\ \displaystyle\sum_{k=1}^n a_k b_k=A_nb_{n+1}+\sum_{k=1}^{n}A_k\left(b_k-b_{k+1}\right)$$ where $$\displaystyle A_n=\sum_{i=1}^n a_i$$
letting $$\ \displaystyle a_k=\frac{1}{k^p}$$ and $$\ \displaystyle b_k=\left(H_k^{(p)}\right)^2$$, we get \begin{align} S&=\sum_{k=1}^n\frac{\left(H_k^{(p)}\right)^2}{k^p}=\sum_{i=1}^n\frac{\left(H_{n+1}^{(p)}\right)^2}{i^p}+\sum_{k=1}^n\left(\sum_{i=1}^k\frac1{i^p}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^n\left(H_k^{(p)}\right)\left(\left(H_k^{(p)}\right)^2-\left(H_{k+1}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}+\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\left(H_{k-1}^{(p)}\right)^2-\left(H_{k}^{(p)}\right)^2\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n+1}\left(H_{k-1}^{(p)}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)\\ &=\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\sum_{k=1}^{n}\left(H_{k}^{(p)}-\frac1{k^p}\right)\left(\frac{2H_k^{(p)}}{k^p}-\frac1{k^{2p}}\right)-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)\\ &=\underbrace{\left(H_{n+1}^{(p)}\right)^2H_n^{(p)}-\left(H_{n}^{(p)}\right)\left(\frac{2H_{n+1}^{(p)}}{(n+1)^p}-\frac1{{(n+1)}^{2p}}\right)}_{\Large\left(H_n^{(p)}\right)^3}-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}-H_n^{(3p)}\\ &=-2S+3\sum_{k=1}^n\frac{H_k^{(p)}}{k^{(2p)}}+\left(H_n^{(p)}\right)^3-H_n^{(3p)} \end{align}
which follows $$S=\frac13\left(\left(H_n^{(p)}\right)^3-H_n^{(3p)}\right)+\sum_{k=1}^n\frac{H_k^{(p)}}{k^{2p}}$$
• Keeping track of the summation indices, it seems like there is a small typo or error while going from the upper bound $n$ to $n+1$, as the lower bound remains unaffected after all.
|
https://math.stackexchange.com/questions/1575088/find-the-value-of-the-series-sum-limits-n-1-infty-fracn2n
| 1,653,555,834,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00269.warc.gz
| 455,171,661
| 60,563
|
# Find the value of the series $\sum\limits_{n=1}^ \infty \frac{n}{2^n}$ [duplicate]
Find the value of the series $\sum\limits_{n=1}^ \infty \dfrac{n}{2^n}$
The series on expanding is coming as $\dfrac{1}{2}+\dfrac{2}{2^2}+..$
I tried using the form of $(1+x)^n=1+nx+\dfrac{n(n-1)}{2}x^2+..$ and then differentiating it, but it is still not coming out. What shall I do with this?
• This might help
– user297008
Dec 14, 2015 at 12:36
• Looks like the derivative of a geometric series to me Dec 14, 2015 at 12:37
• See this for other ideas. Dec 14, 2015 at 12:38
• Just differentiate $\frac{1}{2(1-x)}=\frac12\sum x^n$ and set $x=\frac12$. Dec 14, 2015 at 12:39
$$\sum_{n=1}^{\infty}\frac{n}{2^n}=\lim_{m\to\infty}\sum_{n=1}^{m}\frac{n}{2^n}=\lim_{m\to\infty}\frac{-m+2^{m+1}-2}{2^m}=$$ $$\lim_{m\to\infty}\frac{-2^{1-m}+2-2^{-m}m}{1}=\frac{0+2-0}{1}=\frac{2}{1}=2$$
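A quick numerical sketch (Python) of both viewpoints: the partial sums tend to $2$, which matches evaluating the derivative of the geometric series, $\sum_{n\ge 1} n x^n = \frac{x}{(1-x)^2}$, at $x = \frac12$.

```python
# Partial sums of sum_{n>=1} n / 2^n approach 2
partial = 0.0
for n in range(1, 60):
    partial += n / 2**n
print(partial)                 # ~2.0 (to double precision)

# Derivative route: sum n x^(n-1) = 1/(1-x)^2, hence sum n x^n = x/(1-x)^2
x = 0.5
print(x / (1 - x)**2)          # exactly 2.0
```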
| 369
| 864
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.09375
| 4
|
CC-MAIN-2022-21
|
latest
|
en
| 0.62588
|
# Find the value of the series $\sum\limits_{n=1}^ \infty \frac{n}{2^n}$ [duplicate]
Find the value of the series $\sum\limits_{n=1}^ \infty \dfrac{n}{2^n}$
The series on expanding is coming as $\dfrac{1}{2}+\dfrac{2}{2^2}+..$
I tried using the form of $(1+x)^n=1+nx+\dfrac{n(n-1)}{2}x^2+..$ and then differentiating it, but it is still not coming out. What shall I do with this? • This might help
– user297008
Dec 14, 2015 at 12:36
• Looks like the derivative of a geometric series to me Dec 14, 2015 at 12:37
• See this for other ideas. Dec 14, 2015 at 12:38
• Just differentiate $\frac{1}{2(1-x)}=\frac12\sum x^n$ and set $x=\frac12$.
|
Dec 14, 2015 at 12:39
$$\sum_{n=1}^{\infty}\frac{n}{2^n}=\lim_{m\to\infty}\sum_{n=1}^{m}\frac{n}{2^n}=\lim_{m\to\infty}\frac{-m+2^{m+1}-2}{2^m}=$$ $$\lim_{m\to\infty}\frac{-2^{1-m}+2-2^{-m}m}{1}=\frac{0+2-0}{1}=\frac{2}{1}=2$$
|
https://math.stackexchange.com/questions/2455714/prove-int-0-pi-2-x-left-sin-nx-over-sin-x-right4-mathrmdxn2-pi2
| 1,563,631,112,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-30/segments/1563195526517.67/warc/CC-MAIN-20190720132039-20190720154039-00153.warc.gz
| 470,548,607
| 35,894
|
# Prove $\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}$
Prove $$\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}.$$
My attempt: \begin{align} \int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x & =\sum_{k=1}^n \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x\\ & \leq\sum_{k=1}^n \left({\pi\over 2}\right)^4 \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}\left({\sin^4nx\over x^3}\right)\mathrm{d}x \quad (\text{use } \sin x\geq {2\over \pi}x ) \tag{1}\label{1}\\ &= \left({\pi\over 2}\right)^4 n^2\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x \quad (\text{use } x\to {x\over n}).\\ \end{align} Is my direction right? If right, how can I prove the following $$\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x\leq\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x \leq {2\over \pi^2}.$$ I use Mathematica to calculate the integral $\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x\simeq 0.7>{2\over\pi^2}$, hence my process (\ref{1}) seems to be wrong.
• I supposed $n$ positive integer. Is it right? – Raffaele Oct 3 '17 at 12:54
• @Raffaele Yes it is. – yahoo Oct 3 '17 at 13:01
The term $\left(\frac{\sin nx}{\sin x}\right)^4$ is associated with the Jackson kernel.
Your inequality is indeed just a minor variation on Lemma 0.5 in the linked notes, and it can be proved through the same technique: expand $|x|$ as a Fourier cosine series over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, do the same for $\left(\frac{\sin nx}{\sin x}\right)^4$, then apply orthogonality/Bessel's inequality.
• Great answer! The question is the first exercise of a chapter about integrals; I thought it would be easy for me. By reading the material you gave, I should estimate the $\int_0^{\pi\over 2n}$ part by using $\sin x\geq {2\over\pi}x$ and the rest using $x\geq {k\pi\over 2n}$. The first part by itself is still larger than the right-hand side, so I need a sharper estimate than $\sin x\geq {2\over\pi} x$. By the way, if I accept the answer, how can I ask more people to see if there is a simpler answer? – yahoo Oct 3 '17 at 14:38
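Independently of the Fourier-series proof, the inequality can be sanity-checked numerically; below is a rough scipy sketch (illustrative only) comparing the integral with $n^2\pi^2/8$ for small $n$.

```python
import numpy as np
from scipy.integrate import quad

def integrand(x, n):
    # x * (sin(nx)/sin(x))^4; near x = 0 the ratio tends to n, so the integrand behaves like n^4 * x
    if x < 1e-12:
        return n**4 * x
    return x * (np.sin(n * x) / np.sin(x))**4

for n in range(1, 8):
    value, _ = quad(integrand, 0, np.pi / 2, args=(n,), limit=200)
    bound = n**2 * np.pi**2 / 8
    print(n, value, bound, value < bound)   # the bound holds with room to spare
```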
| 909
| 2,274
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.1875
| 4
|
CC-MAIN-2019-30
|
latest
|
en
| 0.606609
|
# Prove $\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}$
Prove $$\int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x<{n^2\pi^2\over 8}.$$
My attempt: \begin{align} \int_0^{\pi/2} x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x & =\sum_{k=1}^n \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}x\left({\sin nx\over \sin x}\right)^4\mathrm{d}x\\ & \leq\sum_{k=1}^n \left({\pi\over 2}\right)^4 \int_{{k-1\over 2n}\pi}^{{k\over 2n}\pi}\left({\sin^4nx\over x^3}\right)\mathrm{d}x \quad (\text{use } \sin x\geq {2\over \pi}x ) \tag{1}\label{1}\\ &= \left({\pi\over 2}\right)^4 n^2\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x \quad (\text{use } x\to {x\over n}).\\ \end{align} Is my direction right? If right, how can I prove the following $$\sum_{k=1}^n \int_{{k-1\over 2}\pi}^{{k\over 2}\pi}\left({\sin^4x\over x^3}\right)\mathrm{d}x\leq\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x \leq {2\over \pi^2}.$$ I use Mathematica to calculate the integral $\int_0^{+\infty}\left({\sin^4x\over x^3}\right)\mathrm{d}x\simeq 0.7>{2\over\pi^2}$, hence my process (\ref{1}) seems to be wrong. • I supposed $n$ positive integer. Is it right? – Raffaele Oct 3 '17 at 12:54
• @Raffaele Yes it is. – yahoo Oct 3 '17 at 13:01
The term $\left(\frac{\sin nx}{\sin x}\right)^4$ is associated with the Jackson kernel.
|
Your inequality is indeed just a minor variation on Lemma 0.5 in the linked notes, and it can be proved through the same technique: expand $|x|$ as a Fourier cosine series over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$, do the same for $\left(\frac{\sin nx}{\sin x}\right)^4$, then apply orthogonality/Bessel's inequality.
|
https://cstheory.stackexchange.com/questions/46185/computing-3d-viewpoint-of-a-set-of-non-intersecting-segments
| 1,621,182,458,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-21/segments/1620243991224.58/warc/CC-MAIN-20210516140441-20210516170441-00603.warc.gz
| 226,243,171
| 36,906
|
# Computing 3D viewpoint of a set of non-intersecting segments
Consider the following problem: we are given a finite set of bounded line-segments in $${\mathbb R}^3$$, and we want to decide whether there exists a point $$p\in {\mathbb R}^3$$ from which no two segments obscure one another.
Can this be done efficiently?
### Problem statement:
More precisely and formally: we are given $$n$$ line segments $$\ell_1,\ldots,\ell_n$$, where each segment is defined as $$\ell_i=\{tu_i+(1-t) v_i: t\in [0,1]\}$$ with $$u_i,v_i\in {\mathbb Q}^3$$ (we assume rational coordinates).
We wish to decide whether there exists a point $$p\in {\mathbb R}^3$$ such that the lines connecting $$p$$ with the points on the segments are all distinct, and if so, compute it.
Is there an efficient solution? Is there a hardness lower bound?
UPDATE: Given the lack of answers so far, what about the case where the line segments connect two adjacent points in a 3D $$k\times k\times k$$ grid? Then they are all parallel to some axis, they are all of length 1, etc. Does this make it significantly easier?
### Inefficient solution:
Observe that for each pair of lines $$\ell_1,\ell_2$$, the points from which the lines do obscure each other can be described as a polyhedron defined as the intersection of 4 half-spaces: for every 3 points in $$\{u_1,v_1,u_2,v_2\}$$, the hyperplane defined by them is a boundary such that on one side of it, the lines do not obscure each other.
Thus, we can represent the set of "bad points" (those from which at least one pair obscure each other) as a union of $$n^2$$ polyhedra (not necessarily disjoint). Then, all we need is to test its complement for emptiness. This can be done e.g. using Fourier-Motzkin quantifier elimination, whose complexity is quite bad. On top of this, we first need to convert a CNF representation to DNF, which may involve an exponential blowup.
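For experimentation, here is a rough numerical heuristic (a hypothetical Python sketch, not the exact polyhedral test above): it samples both segments and declares them mutually obscuring from $p$ when two sampled viewing directions nearly coincide. It ignores degenerate configurations and is only meant to build intuition.

```python
import numpy as np

def obscures(p, seg1, seg2, grid=400, tol=1e-3):
    """Heuristic: do seg1 and seg2 obscure each other as seen from p?

    seg1 and seg2 are pairs (u, v) of 3D endpoints. We sample points on both
    segments and look for viewing directions from p that (almost) coincide;
    an exact solver would instead intersect the two spherical arcs of directions.
    """
    t = np.linspace(0.0, 1.0, grid)
    a = seg1[0] + np.outer(t, seg1[1] - seg1[0]) - p   # vectors from p to seg1 samples
    b = seg2[0] + np.outer(t, seg2[1] - seg2[0]) - p   # vectors from p to seg2 samples
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    smallest_angle = np.arccos(np.clip((a @ b.T).max(), -1.0, 1.0))
    return smallest_angle < tol

p = np.array([0.0, 0.0, 5.0])
s1 = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))   # a unit grid edge
s2 = (np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0]))   # a parallel edge
print(obscures(p, s1, s2))   # False: from this viewpoint the two edges do not line up
```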
| 490
| 1,878
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.53125
| 4
|
CC-MAIN-2021-21
|
latest
|
en
| 0.908188
|
# Computing 3D viewpoint of a set of non-intersecting segments
Consider the following problem: we are given a finite set of bounded line-segments in $${\mathbb R}^3$$, and we want to decide whether there exists a point $$p\in {\mathbb R}^3$$ from which no two segments obscure one another. Can this be done efficiently? ### Problem statement:
More precisely and formally: we are given $$n$$ line segments $$\ell_1,\ldots,\ell_n$$, where each segment is defined as $$\ell_i=\{tu_i+(1-t) v_i: t\in [0,1]\}$$ with $$u_i,v_i\in {\mathbb Q}^3$$ (we assume rational coordinates). We wish to decide whether there exists a point $$p\in {\mathbb R}^3$$ such the lines connecting $$p$$ with each point on the lines are distinct, and if so, compute it. Is there an efficient solution? Is there a hardness lower bound? UPDATE: Given the lack of answers so far, what about the case where the line segments connect two adjacent point in a 3D $$k\times k\times k$$ grid ? Then, they are all parallel to some axis, they are all of length 1, etc. Does this make it significantly easier? ### Inefficient solution:
Observe that for each pair of lines $$\ell_1,\ell_2$$, the points from which the lines do obscure each other can be described as a polyhedron defined as the intersection of 4 half-spaces: for every 3 points in {u_1,v_1,u_2,v_2}, the hyperplane defined by them is a boundary such that on one side of it, the lines do not obscure each other.
|
Thus, we can represent the set of "bad points" (those from which at least one pair obscure each other) as a union of $$n^2$$ polyhedra (not necessarily disjoint).
|
https://math.stackexchange.com/questions/3186954/to-find-an-orthonormal-basis-for-the-row-space-of-a
| 1,566,754,839,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-35/segments/1566027330786.8/warc/CC-MAIN-20190825173827-20190825195827-00532.warc.gz
| 546,275,213
| 30,621
|
# To find an orthonormal basis for the row space of $A$.
To find an orthonormal basis for the row space of $$A = \begin{bmatrix} 2 & -1 & -3 \\ -5 & 5 & 3 \\ \end{bmatrix}$$.
Let $$v_1 = (2\ -1 \ -3)$$ and $$v_2 = (-5 \ \ \ 5 \ \ \ 3)$$.
Using the Gram-Schmidt Process, I found an orthonormal basis $$e_1 = \frac{1}{\sqrt{14}} (2\ -1 \ -3)$$ and $$e_2 = \frac{1}{\sqrt{5}} (-1 \ \ \ 2 \ \ \ 0)$$.
So an orthonormal basis for the row space of $$A =\{ e_1,e_2\}$$ .
Is the solution correct?
• Did you try checking if the two vectors you obtained are orthogonal (i.e. their dot product is $0$)? You should also probably show us the steps in your working, so we can see where you went wrong. – Minus One-Twelfth Apr 14 at 2:45
• Even more importantly, have you checked that $v_1$ and $v_2$ are actually elements of the row space? – amd Apr 14 at 3:31
## 1 Answer
Verify your Gram-Schmidt process again.
Note that we have $$V_1=X_1$$ and $$V_2 = X_2-\frac {X_2\cdot V_1}{V_1\cdot V_1}V_1$$
My calculations did not match with yours.
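For reference, a short numerical sketch (Python) of the Gram-Schmidt step for these two rows; it also shows that the $e_2$ proposed in the question is not orthogonal to $v_1$, which is why the calculations disagree.

```python
import numpy as np

v1 = np.array([2.0, -1.0, -3.0])
v2 = np.array([-5.0, 5.0, 3.0])

e1 = v1 / np.linalg.norm(v1)
w2 = v2 - (v2 @ v1) / (v1 @ v1) * v1      # subtract the projection of v2 onto v1
e2 = w2 / np.linalg.norm(w2)

print(e1, e2)
print(np.isclose(e1 @ e2, 0))             # orthogonal
print(np.linalg.norm(e1), np.linalg.norm(e2))  # both have length 1

e2_claimed = np.array([-1.0, 2.0, 0.0]) / np.sqrt(5)
print(v1 @ e2_claimed)                    # nonzero, so it cannot be correct
```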
| 371
| 1,026
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.5
| 4
|
CC-MAIN-2019-35
|
latest
|
en
| 0.843742
|
# To find an orthonormal basis for the row space of $A$. To find an orthonormal basis for the row space of $$A = \begin{bmatrix} 2 & -1 & -3 \\ -5 & 5 & 3 \\ \end{bmatrix}$$. Let $$v_1 = (2\ -1 \ -3)$$ and $$v_2 = (-5 \ \ \ 5 \ \ \ 3)$$. Using the Gram-Schmidt Process, I found an orthonormal basis $$e_1 = \frac{1}{\sqrt{14}} (2\ -1 \ -3)$$ and $$e_2 = \frac{1}{\sqrt{5}} (-1 \ \ \ 2 \ \ \ 0)$$. So an orthonormal basis for the row space of $$A =\{ e_1,e_2\}$$ . IS the solution correct? • Did you try checking if the two vectors you obtained are orthogonal (i.e. their dot product is $0$)? You should also probably show us the steps in your working, so we can see where you went wrong. – Minus One-Twelfth Apr 14 at 2:45
• Even more importantly, have you checked that $v_1$ and $v_2$ are actually elements of the row space? – amd Apr 14 at 3:31
## 1 Answer
Verify your Gram-Schmidt process again.
|
Note that we have $$V_1=X_1$$ and $$V_2 = X_2-\frac {X_2\cdot V_1}{V_1\cdot V_1}V_1$$
My calculations did not match with yours.
|
https://math.stackexchange.com/questions/1827111/probability-with-coins
| 1,560,662,721,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-26/segments/1560627997731.69/warc/CC-MAIN-20190616042701-20190616064701-00038.warc.gz
| 527,510,068
| 33,314
|
Probability with coins
I'm self-learning and I stumbled upon the following task, but I'm struggling to find the solution:
Two players flip coins. The first player flips 3 coins, the second player flips 2 coins. The player that gets the most tails wins 5 coins. If both players get the same number of tails, the game starts over.
1. What is the probability of the first player to win on the first attempt?
2. What is the probability of the first player to win the game?
3. How is the prize distributed?
My solution:
if H=heads, T=tails then on the first attempt the following outcomes are possible:
{(HHH, HH), (HHH, HT), (HHH, TH), (HHH, TT),
(HHT, HH), (HHT, HT), (HHT, TH), (HHT, TT),
(HTH, HH), (HTH, HT), (HTH, TH), (HTH, TT),
(THH, HH), (THH, HT), (THH, TH), (THH, TT),
(HTT, HH), (HTT, HT), (HTT, TH), (HTT, TT),
(THT, HH), (THT, HT), (THT, TH), (THT, TT),
(TTH, HH), (TTH, HT), (TTH, TH), (TTH, TT),
(TTT, HH), (TTT, HT), (TTT, TH), (TTT, TT)}
Total cases: 32; First player wins in 16; Second player in 6; Game is repeated in 10.
1. The probability of the first player to win the game on the first attempt is $\frac {16} {32} = \frac 12$.
2. The probability of the first player to win the game is $\frac {16}{32}\frac {10}{32} = \frac {5}{32}$ ??
I'm not very sure if the second is correct. Is it right to conclude that if the game is repeated $n$ times the chance of the first player to win is the same as if the game is repeated 1 time?
• You've got to be careful. For the first player, $HHT$ is a different throw from $HTH$, and they need to be separate entries. If they aren't treated as separate, then the probabilities aren't uniform, they go like this: $P(3H) = 1/8, P(2H) = 3/8, P(1H) = 3/8, P(0H) = 1/8$. – Arthur Jun 15 '16 at 12:27
• Thanks! Modified my question. The cases should be correct now, but is it so with my answer? – Ivan Prodanov Jun 15 '16 at 12:39
• You do not need to list all solutions. Player 1 and player 2 are independently distributed, meaning, the outcome of player 1 does not affect the probability of the outcomes of player 2, and conversely. Player 1 plays a Binomial distribution with $n=3$ attempts and probability of success $p=\frac{1}{2}$. Player 2 plays also a binomial distribution with $p = \frac{1}{2}$, but with $n=2$ attempts. – Lærne Jun 15 '16 at 12:46
• Using binomial distribution for the first answer looks interesting. So in order for the first player to win on the first trial it would be $\binom 3 3p^3(1-p)^0 + \binom 3 2p^2(1-p)^1(1-\binom 2 2p^2(1-p)^0) + \binom 3 1p(1-p)^2\binom 2 0p^0(1-p)^2$ – Ivan Prodanov Jun 15 '16 at 13:13
About the second part, you can think this way:
Firstly, in each trial the probability that the first player wins is $\frac{1}{2}$, as you have calculated. The probability of the second person to win a trial is $\frac{3}{16}$. The probability of a draw is $1-\frac{1}{2}-\frac{3}{16}=\frac{5}{16}$.
Having the probabilities for a single trial, the probability that the first person wins, in total, is calculated considering the probabilities of the following scenarios:
1- the first person wins in the first trial ($\frac{1}{2}$)
2- the first trial ends in a draw and in the second trial, the first person wins ($(\frac{5}{16})(\frac{1}{2})$)
3- in general, we need to have $n$ draws and one win (for the first person) at the end, which happens with the probability $(\frac{5}{16})^n(\frac{1}{2})$
Since the mentioned scenarios are disjoint, they can be added up to give the final answer
$\frac{1}{2}\sum_{i=0}^{\infty}(\frac{5}{16})^i=\frac{1}{2}\frac{1}{1-\frac{5}{16}}=\frac{8}{11}$
For the third part, I think it should be clarified what the prize distribution is.
• Thanks! The third part I believe is related to probability distribution. Any clue on this one? – Ivan Prodanov Jun 15 '16 at 13:26
• We need to have a random variable defined first, so we can calculate the probability distribution. So, the distribution of prize is not well defined. – Med Jun 15 '16 at 13:36
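A small Monte Carlo sketch (Python, purely illustrative) agrees with the per-trial probabilities $\frac12$, $\frac{3}{16}$, $\frac{5}{16}$ and with the overall winning probability $\frac{8}{11}\approx 0.727$ for the first player.

```python
import random

def play_game(rng):
    # Repeat trials until someone wins; return True if player 1 wins.
    while True:
        t1 = sum(rng.random() < 0.5 for _ in range(3))   # tails for player 1
        t2 = sum(rng.random() < 0.5 for _ in range(2))   # tails for player 2
        if t1 > t2:
            return True
        if t2 > t1:
            return False
        # tie: the game starts over

rng = random.Random(0)
trials = 200_000
wins = sum(play_game(rng) for _ in range(trials))
print(wins / trials, 8 / 11)   # both about 0.727
```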
| 1,245
| 3,977
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.125
| 4
|
CC-MAIN-2019-26
|
latest
|
en
| 0.937384
|
Probability with coins
I'm self-learning and I stumbled upon the following task, but I'm struggling to find the solution:
Two players flip coins. The first player flips 3 coins, the second player flips 2 coins. The player that gets the most tails wins 5 coins. If both players get the same number of tails, the game starts over. 1. What is the probability of the first player to win on the first attempt? 2. What is the probability of the first player to win the game? 3. How is the prize distributed? My solution:
if H=heads, T=tails then on the first attempt the following outcomes are possible:
{(HHH, HH), (HHH, HT), (HHH, TH), (HHH, TT),
(HHT, HH), (HHT, HT), (HHT, TH), (HHT, TT),
(HTH, HH), (HTH, HT), (HTH, TH), (HTH, TT),
(THH, HH), (THH, HT), (THH, TH), (THH, TT),
(HTT, HH), (HTT, HT), (HTT, TH), (HTT, TT),
(THT, HH), (THT, HT), (THT, TH), (THT, TT),
(TTH, HH), (TTH, HT), (TTH, TH), (TTH, TT),
(TTT, HH), (TTT, HT), (TTT, TH), (TTT, TT)}
Total cases: 32; First player wins in 16; Second player in 6; Game is repeated in 10. 1. The probability of the first player to win the game on the first attempt is $\frac {16} {32} = \frac 12$. 2. The probability of the first player to win the game is $\frac {16}{32}\frac {10}{32} = \frac {5}{32}$ ? ? I'm not very sure if the second is correct. Is it right to conclude that if the game is repeated $n$ times the chance of the first player to win is the same as if the game is repeated 1 time? • You've got to be careful. For the first player, $HHT$ is a different throw from $HTH$, and they need to be separate entries. If they aren't treated as separate, then the probabilities aren't uniform, they go like this: $P(3H) = 1/8, P(2H) = 3/8, P(1H) = 3/8, P(0H) = 1/8$. – Arthur Jun 15 '16 at 12:27
• Thanks! Modified my question. The cases should be correct now, but is it so with my answer? – Ivan Prodanov Jun 15 '16 at 12:39
• You do not need to list all solutions. Player 1 and player 2 are independently distributed, meaning, the outcome of player 1 does not affect the probability of the outcomes of player 2, and conversely. Player 1 plays a Binomial distribution with $n=3$ attempts and probability of success $p=\frac{1}{2}$. Player 2 plays also a binomial distribution with $p = \frac{1}{2}$, but with $n=2$ attempts. – Lærne Jun 15 '16 at 12:46
• Using binomial distribution for the first answer looks interesting. So in order for the first player to win on the first trial it would be $\binom 3 3p^3(1-p)^0 + \binom 3 2p^2(1-p)^1(1-\binom 2 2p^2(1-p)^0) + \binom 3 1p(1-p)^2\binom 2 0p^0(1-p)^2$ – Ivan Prodanov Jun 15 '16 at 13:13
About the second part, you can think this way:
Firstly, in each trial the probability that the first player wins is $\frac{1}{2}$, as you have calculated. The probability of the second person to win a trial is $\frac{3}{16}$. The probability of a draw is $1-\frac{1}{2}-\frac{3}{16}=\frac{5}{16}$.
|
Having the probabilities for a single trial, the probability that the first person wins, in total, is calculated considering the probabilities of the following scenarios:
1- the first person wins in the first trial ($\frac{1}{2}$)
2- the first trial ends in a draw and in the second trial, the first person wins ($(\frac{5}{16})(\frac{1}{2})$)
3- in general, we need to have $n$ draws and one win (for the first person) at the end, which happens with the probability $(\frac{5}{16})^n(\frac{1}{2})$
Since the mentioned scenarios are disjoint, they can be added up to give the final answer
$\frac{1}{2}\sum_{i=0}^{\infty}(\frac{5}{16})^i=\frac{1}{2}\frac{1}{1-\frac{5}{16}}=\frac{8}{11}$
For the third part, I think it should be clarified what the prize distribution is.
|
https://math.stackexchange.com/questions/3087270/proof-verification-the-orthogonal-complement-of-the-column-space-is-the-left-nu
| 1,620,285,752,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00489.warc.gz
| 417,044,126
| 38,592
|
# Proof Verification: the orthogonal complement of the column space is the left nullspace
Can someone please check my proof and my definitions.
Let $$A \in \mathbb{R}^{n \times m}$$ be my matrix.
The left null space of $$A$$ is written as,
$$\mathcal{N}(A^\top) = \{x \in \mathbb{R}^n| A^\top x = 0\}$$
The orthogonal complement of the column space $$\mathcal{C}(A)$$ is written as,
$$\mathcal{C}(A)^\perp = \{x \in \mathbb{R}^n | x^\top y = 0, \forall y \in \mathcal{C}(A)\}$$
We want to show that $$\mathcal{N}(A^\top) = \mathcal{C}(A)^\perp$$
First, we show, $$\mathcal{N}(A^\top) \subseteq \mathcal{C}(A)^\perp$$
Let $$x \in \mathcal{N}(A^\top)$$, then $$A^\top x = 0 \implies x^\top A = 0^\top \implies x^\top Av= 0^\top v, \forall v \in \mathcal{C}(A) \implies x^\top y = 0 , y = Av$$, $$\implies x \in C(A)^\perp$$.
Next, we show, $$\mathcal{N}(A^\top) \supseteq \mathcal{C}(A)^\perp$$
Let $$x \in C(A)^\perp$$, then $$x^\top y = 0$$ for all $$y \in C(A)$$. But $$y = Av, \forall v \in \mathbb{R}^n$$. Hence, $$x^\top y = x^\top Av = v^\top A^\top x.$$ For all $$v \neq 0, A^\top x = 0$$, hence $$x \in \mathcal{N}(A^\top)$$.
I'm pretty confident about the first proof. But the second proof is a bit more rough. Can someone please check for me.
$$y \in C(A)$$ means that there exists (at least one) $$v$$ of appropriate dimension such that $$y = Av$$.
So we can say: For $$x \in C(A)^{\perp}$$, then $$x^T y = 0$$ for every $$y \in C(A)$$. For every $$y \in C(A)$$, we can express $$y = Av$$ for some (nonzero) $$v$$. So we can always express $$x^T y$$ as $$x^T Av$$. So $$x^T y = x^T (A v) = (x^T A) v = (A^T x)^T v = 0^T v = 0$$ for $$v \neq 0$$, so we must have $$A^T x = 0$$, i.e., $$x \in N(A^T)$$.
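A numerical illustration (Python/scipy sketch with a random matrix, assuming scipy is available): a basis of $N(A^\top)$ is orthogonal to every column of $A$, and the dimensions of $C(A)$ and its orthogonal complement add up to $n$.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
n, m = 5, 3
A = rng.normal(size=(n, m))

N = null_space(A.T)            # columns form a basis of N(A^T), a subspace of R^n
print(np.allclose(A.T @ N, 0))                       # definition of the left null space
print(np.allclose(N.T @ A, 0))                       # each basis vector is orthogonal to C(A)
print(np.linalg.matrix_rank(A) + N.shape[1] == n)    # dim C(A) + dim C(A)^perp = n
```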
| 694
| 1,724
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 33, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.953125
| 4
|
CC-MAIN-2021-21
|
latest
|
en
| 0.551844
|
# Proof Verification: the orthogonal complement of the column space is the left nullspace
Can someone please check my proof and my definitions. Let $$A \in \mathbb{R}^{n \times m}$$ be my matrix. The left null space of $$A$$ is written as,
$$\mathcal{N}(A^\top) = \{x \in \mathbb{R}^n| A^\top x = 0\}$$
The orthogonal complement of the column space $$\mathcal{C}(A)$$ is written as,
$$\mathcal{C}(A)^\perp = \{x \in \mathbb{R}^n | x^\top y = 0, \forall y \in \mathcal{C}(A)\}$$
We want to show that $$\mathcal{N}(A^\top) = \mathcal{C}(A)^\perp$$
First, we show, $$\mathcal{N}(A^\top) \subseteq \mathcal{C}(A)^\perp$$
Let $$x \in \mathcal{N}(A^\top)$$, then $$A^\top x = 0 \implies x^\top A = 0^\top \implies x^\top Av= 0^\top v, \forall v \in \mathcal{C}(A) \implies x^\top y = 0 , y = Av$$, $$\implies x \in C(A)^\perp$$. Next, we show, $$\mathcal{N}(A^\top) \supseteq \mathcal{C}(A)^\perp$$
Let $$x \in C(A)^\perp$$, then $$x^\top y = 0$$, forall $$y \in C(A)$$. But $$y = Av, \forall v \in \mathbb{R}^n$$. Hence, $$x^\top y = x^\top Av = v^\top A^\top x.$$ For all $$v \neq 0, A^\top x = 0$$, hence $$x \in \mathcal{N}(A^\top)$$. I'm pretty confident about the first proof. But the second proof is a bit more rough. Can someone please check for me. $$y \in C(A)$$ means that there exists (at least one) $$v$$ of appropriate dimension such that $$y = Av$$. So we can say: For $$x \in C(A)^{\perp}$$, then $$x^T y = 0$$ for every $$y \in C(A)$$. For every $$y \in C(A)$$, we can express $$y = Av$$ for some (nonzero) $$v$$. So we can always express $$x^T y$$ as $$x^T Av$$.
|
So $$x^T y = x^T (A v) = (x^T A) v = (A^T x)^T v = 0^T v = 0$$ for $$v \neq 0$$, so we must have $$A^T x = 0$$, i.e., $$x \in N(A^T)$$.
|
https://math.stackexchange.com/questions/874631/finding-cut-off-point-for-utility-function/874967
| 1,718,852,005,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198861880.60/warc/CC-MAIN-20240620011821-20240620041821-00426.warc.gz
| 335,602,211
| 36,859
|
# Finding cut-off point for utility function
OK, so apologies for the easy question, but I'm new to this! This is somewhere between elementary algebra, and beginner's game theory. The question comes from a paper I read here (see p. 193): http://home.uchicago.edu/~sashwort/valence.pdf
The following is a utility function for an individual comparing two alternatives (call them L and R). The individual, $i$, prefers L to R when:
$V_L - (x^* - x_L)^2 > V_R - (x^* - x_R)^2$
So far so good. The difficulty I'm having is figuring out how we can get from here to a cutoff rule, such that $i$ will prefer L if and only if:
$x^* < \hat{x}(x_L,x_R,v_L,v_R)$
The paper says that this can be accomplished via "straightforward algebra" to reach:
$\hat{x}(x_L,x_R,v_L,v_R) = \frac{1}{2}(x_R + x_L) + \frac{V_L - V_R}{2(x_R-x_L)}$
Sadly, for me, this algebra ain't so straightforward. If anyone could walk me through the steps to reach this point (or point out how I should approach this) that'd be great. Of course, in the SO tradition, anything more general that can help make this question more applicable to others is also very welcome.
Thanks!
--
EDIT: posted this q this morning, and have had some views but no nibbles... anyone got any suggestions? Thanks so much!
• 1. Expand both sides. 2. Cancel the $(x^*)^2$ that appears on both sides. 3. Solve for $x^*$. 4. Simplify, remembering that $(x_L^2-x_R^2)=(x_L+x_R)(x_L-x_R)$. 5. Drop the "algebraic-geometry" tag! :) Commented Jul 22, 2014 at 16:49
• Thanks so much - that's really great! Commented Jul 22, 2014 at 17:20
Just to avoid cumbersome notation, use $x_*$ instead of $x^*$.
Then we have (step by step)
$V_R - (x_* - x_R)^2 < V_L - (x_* - x_L)^2$,
$V_R - x_{*}^2 - x_{R}^2 + 2x_* x_R < V_L - x_{*}^2 - x_{L}^2 + 2x_* x_L$,
$V_R - x_{R}^2 + 2x_* x_R < V_L - x_{L}^2 + 2x_* x_L$,
$2x_* x_R - 2x_* x_L + x_{L}^2 - x_{R}^2 < V_L - V_R$,
$2x_* ( x_R - x_L) < (V_L - V_R) + (x_{R}^2 - x_{L}^2)$,
$x_* < \frac{(V_L - V_R)}{2 ( x_R - x_L)} + \frac{1}{2}(x_{R} + x_{L})$.
As somebody suggested, drop the algebraic topology tag. ;)
I hope it helps!
• Many thanks for this! It's great. Just one thing: in the last step, when you divide $(V_L - V_R)$ by $2(x_R - x_L)$, why is the other term $(x_R^2 - x_L^2)$ not also divided by $(x_R - x_L)$? I understand the $\frac{1}{2}$ part but don't understand where the other bit goes! Apologies for confusion. Many thanks. Commented Jul 22, 2014 at 17:23
• Set $x_R = a$ and $x_L =b$. Then you have $\frac{a^2 - b^2}{2(a-b)}$. But this is nothing more than $\frac{(a - b)(a+b)}{2(a-b)}$, and you simplify to get $\frac{(a+b)}{2}$. Commented Jul 22, 2014 at 17:36
• Ah I see. In which case, I think the last term in the last line ought to be $\frac{1}{2}(x_R + x_L)$ i.e., without the square term on $x_R$ and $x_L$? Commented Jul 22, 2014 at 22:35
• Indeed, I corrected the typo. Commented Jul 23, 2014 at 6:23
• Great, many thanks again. Commented Jul 23, 2014 at 8:35
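A quick numerical sanity check (Python, with made-up parameter values satisfying $x_L < x_R$): the preference flips exactly at the cutoff $\hat x$.

```python
def prefers_L(x_star, xL, xR, VL, VR):
    return VL - (x_star - xL)**2 > VR - (x_star - xR)**2

def cutoff(xL, xR, VL, VR):
    return 0.5 * (xR + xL) + (VL - VR) / (2 * (xR - xL))

# hypothetical values, with xL < xR
xL, xR, VL, VR = -1.0, 2.0, 0.5, 0.3
xhat = cutoff(xL, xR, VL, VR)

eps = 1e-6
print(prefers_L(xhat - eps, xL, xR, VL, VR))   # True: just below the cutoff, L is preferred
print(prefers_L(xhat + eps, xL, xR, VL, VR))   # False: just above, R is preferred
```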
| 1,035
| 2,971
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.984375
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.849944
|
# Finding cut-off point for utility function
OK, so apologies for the easy question, but I'm new to this! This is somewhere between elementary algebra, and beginner's game theory. The question comes from a paper I read here (see p. 193): http://home.uchicago.edu/~sashwort/valence.pdf
The following is a utility function for an individual comparing two alternatives (call them L and R). The individual, $i$, prefers L to R when:
$V_L - (x^* - x_L)^2 > V_R - (x^* - x_R)^2$
So far so good. The difficulty I'm having is figuring out how we can get from here to a cutoff rule, such that $i$ will prefer L if and only if:
$x^* < \hat{x}(x_L,x_R,v_L,v_R)$
The paper says that this can be accomplished via "straightforward algebra" to reach:
$\hat{x}(x_L,x_R,v_L,v_R) = \frac{1}{2}(x_R + x_L) + \frac{V_L - V_R}{2(x_R-x_L)}$
Sadly, for me, this algebra ain't so straightforward. If anyone could walk me through the steps to reach this point (or point out how I should approach this) that'd be great. Of course, in the SO tradition, anything more general that can help make this question more applicable to others is also very welcome. Thanks! --
EDIT: posted this q this morning, and have had some views but no nibbles... anyone got any suggestions? Thanks so much! • 1. Expand both sides. 2. Cancel the $(x^*)^2$ that appears on both sides. 3. Solve for $x^*$. 4. Simplify, remembering that $(x_L^2-x_R^2)=(x_L+x_R)(x_L-x_R)$. 5. Drop the "algebraic-geometry" tag! :) Commented Jul 22, 2014 at 16:49
• Thanks so much - that's really great! Commented Jul 22, 2014 at 17:20
Just to avoid cumbersome effects, use $x_*$ instead of $x^*$. Then we have (step by step)
$V_R - (x_* - x_R)^2 < V_L - (x_* - x_L)^2$,
$V_R - x_{*}^2 - x_{R}^2 + 2x_* x_R < V_L - x_{*}^2 - x_{L}^2 + 2x_* x_L$,
$V_R - x_{R}^2 + 2x_* x_R < V_L - x_{L}^2 + 2x_* x_L$,
$2x_* x_R - 2x_* x_L + x_{L}^2 - x_{R}^2 < V_L - V_R$,
$2x_* ( x_R - x_L) < (V_L - V_R) + (x_{R}^2 - x_{L}^2)$,
$x_* < \frac{(V_L - V_R)}{2 ( x_R - x_L)} + \frac{1}{2}(x_{R} + x_{L})$. As somebody suggested, drop the algebraic topology tag. ;)
I hope it helps! • Many thanks for this! It's great. Just one thing: in the last step, when you divide $(V_R - V_R)$ by $2(X_R - X_L)$, why is the other term $(X_R^2 - X_L^2)$ not also divided by $(X_R - X_L)$? I understand the $\frac{1}{2}$ part but don't understand where the other bit goes! Apologies for confusion. Many thanks.
|
Commented Jul 22, 2014 at 17:23
• Set $x_R = a$ and $x_L =b$.
|
https://dsp.stackexchange.com/questions/54772/system-function-h-omega-relationship-to-odd-and-even-components-of-hn
| 1,631,972,780,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-39/segments/1631780056476.66/warc/CC-MAIN-20210918123546-20210918153546-00451.warc.gz
| 284,420,026
| 38,467
|
# system function $H(\omega)$ relationship to odd and even components of h[n]
What qualities of $$h[n]$$ are necessary for:
$$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$
Do all real / causal h[n] have the property that:
$$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$
where:
$$h_{even}[n] = \frac{1}{2}(h[n] + h[-n])$$
$$h_{odd}[n] = \frac{1}{2}(h[n] - h[-n])$$
The DTFT relationships
$$x_{even}[n]=\frac12\left(x[n]+x^*[-n]\right)\Longleftrightarrow\textrm{Re}\left\{X(e^{j\omega})\right\}$$
and
$$x_{odd}[n]=\frac12\left(x[n]-x^*[-n]\right)\Longleftrightarrow j\,\textrm{Im}\left\{X(e^{j\omega})\right\}$$
hold for any sequence $$x[n]$$ for which the DTFT exists. There is no assumption about $$x[n]$$ being real-valued or causal (note the complex conjugation $$^*$$ in the definition of even and odd signals). If $$x[n]$$ is real-valued you can leave out the conjugation.
Note that the DTFT of the odd part $$x_{odd}[n]$$ equals $$j$$ times the imaginary part of the DTFT $$X(e^{j\omega})$$, so you have
$$X(e^{j\omega})=\textrm{DTFT}\{x_{even}[n]\}+\textrm{DTFT}\{x_{odd}[n]\}$$
(without a $$j$$ on the right-hand side).
• thanks, makes sense now. any suggestion for title? Jan 12 '19 at 15:55
• @MrCasuality: If your question has been answered you can accept this answer by clicking on the green check mark to its left, thanks. Jan 12 '19 at 17:07
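A numerical sketch (Python, using the DFT on a finite grid as a stand-in for the DTFT, with $h[-n]$ taken circularly): for a real sequence, the transform of the even part is $\textrm{Re}\{H\}$ and the transform of the odd part is $j\,\textrm{Im}\{H\}$.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 16
h = rng.normal(size=N)                  # a real-valued sequence

h_rev = h[(-np.arange(N)) % N]          # h[-n], interpreted circularly on the DFT grid
h_even = 0.5 * (h + h_rev)
h_odd = 0.5 * (h - h_rev)

H = np.fft.fft(h)
print(np.allclose(np.fft.fft(h_even), H.real))       # transform of the even part = Re{H}
print(np.allclose(np.fft.fft(h_odd), 1j * H.imag))   # transform of the odd part = j Im{H}
print(np.allclose(np.fft.fft(h_even) + np.fft.fft(h_odd), H))
```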
| 501
| 1,404
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.578125
| 4
|
CC-MAIN-2021-39
|
latest
|
en
| 0.733083
|
# system function $H(\omega)$ relationship to odd and even components of h[n]
What qualities of $$h[n]$$ are necessary for:
$$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$
Do all real / causal h[n] have the property that:
$$H(e^{j\omega}) = DTFT\{h_{even}[n]\} + j\ DTFT\{h_{odd}[n]\}$$
where:
$$h_{even}[n] = \frac{1}{2}(h[n] + h[-n])$$
$$h_{odd}[n] = \frac{1}{2}(h[n] - h[-n])$$
The DTFT relationships
$$x_{even}[n]=\frac12\left(x[n]+x^*[-n]\right)\Longleftrightarrow\textrm{Re}\left\{X(e^{j\omega})\right\}$$
and
$$x_{odd}[n]=\frac12\left(x[n]-x^*[-n]\right)\Longleftrightarrow j\,\textrm{Im}\left\{X(e^{j\omega})\right\}$$
hold for any sequence $$x[n]$$ for which the DTFT exists. There is no assumption about $$x[n]$$ being real-valued or causal (note the complex conjugation $$^*$$ in the definition of even and odd signals). If $$x[n]$$ is real-valued you can leave out the conjugation.
|
Note that the DTFT of the odd part $$x_{odd}[n]$$ equals $$j$$ times the imaginary part of the DTFT $$X(e^{j\omega})$$, so you have
$$X(e^{j\omega})=\textrm{DTFT}\{x_{even}[n]\}+\textrm{DTFT}\{x_{odd}[n]\}$$
(without a $$j$$ on the right-hand side).
|
http://math.stackexchange.com/questions/99199/solution-of-fredholm-integral-equation-of-the-first-kind-with-symmetric-rational
| 1,419,124,574,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2014-52/segments/1418802770554.119/warc/CC-MAIN-20141217075250-00095-ip-10-231-17-201.ec2.internal.warc.gz
| 181,670,018
| 16,170
|
solution of Fredholm integral equation of the first kind with symmetric rational kernel
How can this Fredholm integral equation of the first kind be solved: $$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}dy$$
-
The equation
$$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}\mathrm{d}y$$
has solution
\begin{align} y(x) &= \frac{1}{2 i} \lim_{\epsilon \to 0^+} \left\{f(-x-i\epsilon)-f(-x+i\epsilon)\right\} \\ &= \frac{1}{\sqrt{x}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!} \left(\frac{\pi}{x} \frac{\mathrm{d}}{\mathrm{d}x}\right)^{2k} \left\{\sqrt{x}f(x)\right\}. \end{align}
Source: Polyanin and Manzhirov, Handbook of Integral Equations, section 3.1-3, #17.
Numerous other sources are cited below the entry there.
-
You could try a Mellin transform. Since $\int _{0}^{\infty }\!{\frac {{x}^{s-1}}{x+y}}{dx}={y}^{s-1}\pi \,\csc \left( \pi \,s \right)$ for $y > 0$ and $0 < \Re s < 1$, the Mellin transforms of $f$ and $g$ satisfy $Mf(s) = \csc(\pi s) Mg(s)$ for $0 < \Re s < 1$. You might then try inverting $Mg(s)$ using the inversion formula
$$g(y) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} Mf(s) \sin(\pi s)\, y^{-s}\ ds$$
where $0 < c < 1$, under appropriate convergence assumptions.
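A numerical check (Python/scipy sketch, illustrative only) of the kernel's Mellin transform $\int_0^\infty \frac{x^{s-1}}{x+y}\,dx = y^{s-1}\pi\csc(\pi s)$ for a few $s\in(0,1)$ and $y>0$:

```python
import numpy as np
from scipy.integrate import quad

def mellin_kernel(s, y):
    # split the range at 1 to help quad with the x^(s-1) singularity at the origin
    a, _ = quad(lambda x: x**(s - 1) / (x + y), 0, 1, limit=200)
    b, _ = quad(lambda x: x**(s - 1) / (x + y), 1, np.inf, limit=200)
    return a + b

for s in (0.25, 0.5, 0.75):
    for y in (0.5, 1.0, 3.0):
        closed_form = y**(s - 1) * np.pi / np.sin(np.pi * s)
        print(s, y, mellin_kernel(s, y), closed_form)   # the two columns agree
```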
-
| 472
| 1,204
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.671875
| 4
|
CC-MAIN-2014-52
|
latest
|
en
| 0.553492
|
solution of Fredholm integral equation of the first kind with symmetric rational kernel
How can this Fredholm integral equation of the first kind be solved: $$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}dy$$
-
The equation
$$f(x)=\frac{1}{\pi}\int_{0}^{\infty}\frac{g(y)}{x+y}\mathrm{d}y$$
has solution
\begin{align} y(x) &= \frac{1}{2 i} \lim_{\epsilon \to 0^+} \left\{f(-x-i\epsilon)-f(-x+i\epsilon)\right\} \\ &= \frac{1}{\sqrt{x}} \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!} \left(\frac{\pi}{x} \frac{\mathrm{d}}{\mathrm{d}x}\right)^{2k} \left\{\sqrt{x}f(x)\right\}. \end{align}
Source: Polyanin and Manzhirov, Handbook of Integral Equations, section 3.1-3, #17. Numerous other sources are cited below the entry there. -
You could try a Mellin transform. Since $\int _{0}^{\infty }\! {\frac {{x}^{s-1}}{x+y}}{dx}={y}^{s-1}\pi \,\csc \left( \pi \,s \right)$ for $y > 0$ and $0 < \Re s < 1$, the Mellin transforms of $f$ and $g$ satisfy $Mf(s) = \csc(\pi s) Mg(s)$ for $0 < \Re s < 1$.
|
You might then try inverting $Mg(s)$ using the inversion formula
$$g(y) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} Mf(s) \sin(\pi s)\ ds$$
where $0 < c < 1$, under appropriate convergence assumptions.
|
http://math.stackexchange.com/questions/435026/algebraic-divison
| 1,469,759,211,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257829325.58/warc/CC-MAIN-20160723071029-00053-ip-10-185-27-174.ec2.internal.warc.gz
| 157,137,543
| 17,591
|
# Algebraic Division
Is there a way to break the left hand side expression such that it takes the the right hand side form?
$(a+b)/(c+d)=a/c+b/d+k$
Where $k$ is some expression.
-
Yes, and that expression would be $(a+b)/(c+d) - a/c - b/d$. Are you looking for something less stupid or more specific? – Patrick Da Silva Jul 3 '13 at 3:17
Solve for $k$, as Patrick indicated: \begin{align} k&=\frac{a+b}{c+d}-\frac{a}{c}-\frac{b}{d}\\ &=\frac{cd(a+b)-ad(c+d)-bc(c+d)}{cd(c+d)}\\ &=\frac{acd+bcd-acd-ad^2-bc^2-bcd}{cd(c+d)}\\ &=\frac{-ad^2-bc^2}{cd(c+d)} \end{align} In the words of lots of movie cops over the years, "Move along, folks, there's nothing to see here."
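A quick symbolic check of this identity (not in the original post; SymPy is assumed, and the nonzero assumptions on $c$ and $d$ are only there to keep the expression well defined):
```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', nonzero=True)

k = (a + b)/(c + d) - a/c - b/d
# cancel() returns 0 when the two rational expressions are identical.
print(sp.cancel(k - (-(a*d**2 + b*c**2))/(c*d*(c + d))))   # 0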
@jessica: one additional thing to note is that you need $c$ and $d$ non-zero. – James Jul 3 '13 at 13:21
@James: and also $c\ne-d$, else the original expression is undefined. – Rick Decker Jul 3 '13 at 13:42
Here's an old chestnut related to your problem. Take $64/16$, cancel the 6s, and you get $4/1$ which happens to be the right answer. Too bad it doesn't work in general. – Rick Decker Jul 4 '13 at 14:05
| 366
| 1,079
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.09375
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.823834
|
# Algebraic Division
Is there a way to break the left hand side expression such that it takes the the right hand side form? $(a+b)/(c+d)=a/c+b/d+k$
Where $k$ is some expression. -
Yes, and that expression would be $(a+b)/(c+d) - a/c - b/d$. Are you looking for something less stupid or more specific?
|
– Patrick Da Silva Jul 3 '13 at 3:17
Solve for $k$, as Patrick indicated: \begin{align} k&=\frac{a+b}{c+d}-\frac{a}{c}-\frac{b}{d}\\ &=\frac{cd(a+b)-ad(c+d)-bc(c+d)}{cd(c+d)}\\ &=\frac{acd+bcd-acd-ad^2-bc^2-bcd}{cd(c+d)}\\ &=\frac{-ad^2-bc^2}{cd(c+d)} \end{align} In the words of lots of movie cops over the years, "Move along, folks, there's nothing to see here."
|
https://math.stackexchange.com/questions/2025090/finding-the-area-bounded-by-two-curves
| 1,571,601,951,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-43/segments/1570986718918.77/warc/CC-MAIN-20191020183709-20191020211209-00157.warc.gz
| 602,732,631
| 32,525
|
# Finding the area bounded by two curves
Find the area of the region bounded by the parabola $$y = 4x^2$$, the tangent line to this parabola at $$(2, 16)$$, and the $$x$$-axis.
I found the tangent line to be $$y=16x-16$$ and set up the integral from $$0$$ to $$2$$ of $$4x^2-16x+16$$ with respect to $$x$$, which is the top function when looking at the graph minus the bottom function. I took the integral and came up with $$\frac{4}{3}x^3-8x^2+16x$$ evaluated between $$0$$ and $$2$$. This came out to be $$\frac{32}{3}$$ but this was the incorrect answer. Can anyone tell me where I went wrong?
Hint: After drawing it, note that you have to calculate $\int_0^1 4x^2\;dx + \int_1^2 4x^2-16x+16\;dx$.
• I got $\frac{8}{3}$. I'm sorry but did you do it right? – Rodrigo Dias Nov 22 '16 at 0:22
• Any time! ${}{}$ – Rodrigo Dias Nov 22 '16 at 0:28
The tangent crosses the $x$ axis at $x=1$, so your integral also includes (with a plus sign) the triangle made by the tangent below the $x$ axis.
The correct way is to integrate only the parabola for $x=0 \cdots 2$ (which is $32/3$) and then subtract the area of the triangle $(1,0),(2,16),(2,0)$, which is $8$, so the net area is $8/3$.
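A quick symbolic check of both routes to the answer (not in the original thread; SymPy is assumed):
```python
import sympy as sp

x = sp.symbols('x')

# Route 1: parabola alone on [0, 1], parabola minus tangent on [1, 2].
area = sp.integrate(4*x**2, (x, 0, 1)) + sp.integrate(4*x**2 - (16*x - 16), (x, 1, 2))
print(area)   # 8/3

# Route 2: whole parabola on [0, 2] minus the triangle (1,0), (2,0), (2,16).
print(sp.integrate(4*x**2, (x, 0, 2)) - sp.Rational(1, 2)*1*16)   # 8/3
```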
| 406
| 1,191
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.953125
| 4
|
CC-MAIN-2019-43
|
latest
|
en
| 0.923894
|
# Finding the area bounded by two curves
Find the area of the region bounded by the parabola $$y = 4x^2$$, the tangent line to this parabola at $$(2, 16)$$, and the $$x$$-axis. I found the tangent line to be $$y=16x-16$$ and set up the integral from $$0$$ to $$2$$ of $$4x^2-16x+16$$ with respect to $$x$$, which is the top function when looking at the graph minus the bottom function. I took the integral and came up with $$\frac{4}{3}x^3-8x^2+16x$$ evaluated between $$0$$ and $$2$$. This came out to be $$\frac{32}{3}$$ but this was the incorrect answer. Can anyone tell me where I went wrong? Hint: After drawing it, note that you have to calculate $\int_0^1 4x^2\;dx + \int_1^2 4x^2-16x+16\;dx$. • I got $\frac{8}{3}$. I'm sorry but did you do it right? – Rodrigo Dias Nov 22 '16 at 0:22
• Any time! ${}{}$ – Rodrigo Dias Nov 22 '16 at 0:28
The tangent crosses the $x$ axis at $x=1$, so your integral is including (with the plus sign) also the triangle made by the tangent below the $x$ axis.
|
The correct way is to integrate only the parabola for $x=0 \cdots 2$ (which is $32/3$) and then subtract the area of the triangle $(1,0),(2,16),(2,0)$, which is $8$, so the net area is $8/3$.
|
https://cs.stackexchange.com/questions/59453/why-is-b-tree-search-olog-n
| 1,624,268,340,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00595.warc.gz
| 176,416,005
| 41,657
|
# Why is b-tree search O(log n)?
B-tree is a data structure, which looks like this:
If I want to look for some specific value in this structure, I need to go through several elements in the root to find the right child-node. Then I need to go through several elements in the child-node to find its right child-node, etc.
The point is, when I have $n$ elements in every node, then I have to go through all of them in the worst case. So, we have $O(n)$ complexity for searching in one node.
Then, we must go through all the levels of the structure, and there are $\log_m N$ of them, $m$ being the order of the B-tree and $N$ the total number of elements in the tree. So here, we have $O(\log N)$ complexity in the worst case.
Putting this information together, we should have $O(n) \cdot O(\log n) = O(n \log n)$ complexity.
But the complexity is just $O(log n)$ - why? What am I doing wrong?
• Little n and big N are not the same, this is your error – Kurt Mueller Jun 9 '16 at 16:14
You have introduced $$n$$ and $$m$$ as the order of B-tree; I will stick to $$m$$.
The height will be, in the best case, $$\lceil \log_m(N + 1) \rceil$$, and in the worst case $$\lceil \log_{\frac{m}{2}}(N)\rceil$$, but there is also a saturation factor $$d$$ that you have not mentioned.
The height will be $$O(\log N)$$; please notice that $$m$$ disappeared, because it effectively is multiplication by a constant.
Now at every node you have at most $$m$$ sorted elements, so you can perform a binary search costing $$\log_2(m)$$, and the proper complexity is $$O(\log(N) \cdot \log(m))$$.
Since $$m \ll N$$ and, more importantly, $$m$$ does not depend on $$N$$, it should not be mixed up with $$N$$; if it is kept, it should be given explicitly (in terms of $$m$$, not $$N$$ or the $$n$$ appearing above).
• There are cases where binary search is not practicable: for instance when each node of the tree contains variable-length strings rather than fixed-length data. The complexity is then indeed $O(m\log{N})$ rather than $O(\log{m}\log{N})$, but as you point out, $m$ is a constant which does not depend on the number of elements in the tree, so it drops out of consideration either way. – Martin Kochanski Jun 9 '16 at 16:43
• Yes, you are right. – Evil Jun 9 '16 at 17:01
• So, if I understand it well, $m$ is considered a constant, because it's a "firm" input, meanwhile $N$ is not constant, because I can insert/delete elements during the algorithm? – Eenoku Jun 10 '16 at 12:17
• Yes, it is, let me say, a construction-time constant. A tree with such properties can grow ($N$ can increase as long as you wish, or until you run out of memory) without changing the structure. If you populate your tree, changing $m$ would require changing all nodes (possible, but this is not an intended operation on a B-tree). – Evil Jun 10 '16 at 12:55
Considering this as an order $m$ B-Tree, whether or not you take $m$ to be a constant, worst case search takes $\Theta(\lg N)$ total comparisons ($N$ values total). As is stated in another answer (as a newbie, I cannot comment on it yet), the height of the tree is about $\log_m N = (\lg N)/(\lg m)$. Especially if you are taking $m$ to be variable, it is assumed that you will have a logarithmic search per node, order $O(\lg m)$. Multiplying those terms, $\log_m N \cdot \lg m = ((\lg N) / (\lg m)) \cdot \lg m = \lg N$, you don't have to drop the $\lg m$ term using big-O, they really do cancel.
For most (but not all) analysis on external memory algorithms, page size is not treated as a constant. It isn't wrong to do so, but generally gives more information if you don't. One of the difficult things about external memory algorithms is that you are generally trying to optimize (at least) two different things at once: overall operations, and page accesses, which are so inefficient that you might want to minimize them even if it meant paying some extra in other operations. A B-Tree is so elegant because even when you consider page size as a variable, it is asymptotically optimal for operations on comparison based structures, and simultaneously optimizes for page accesses, $O(\lg_m N)$ per search. Notice how uninteresting that last fact becomes if we just consider $m$ as a constant: of course for a $O(\lg N)$ operation search, it would use $O(\lg N)$ page references. $O(\lg_m N)$ is much more informative.
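To make the counting concrete, here is a minimal sketch of the search loop only (illustrative Python, not taken from the original answers; insertion and rebalancing are omitted, and the node layout is an assumption). Each node does one binary search over at most $m$ keys, $O(\lg m)$, and at most $\log_m N$ levels are visited, which is how the two factors combine:
```python
from bisect import bisect_left

class Node:
    def __init__(self, keys, children=None):
        self.keys = keys                    # sorted keys in this node
        self.children = children or []      # empty for a leaf

def btree_search(node, key):
    while node is not None:
        i = bisect_left(node.keys, key)     # O(log m) work inside the node
        if i < len(node.keys) and node.keys[i] == key:
            return True
        # Descend one of the log_m N levels (the i-th child holds keys in that gap).
        node = node.children[i] if node.children else None
    return False

# Tiny example tree of order m = 3.
root = Node([4, 8], [Node([1, 2]), Node([5, 6]), Node([9, 12])])
print(btree_search(root, 6), btree_search(root, 7))   # True False
```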
The point is, when I have n elements in every node, then I have to go through all of them in the worst case. So, we have O(n) complexity for searching in one node.
No. You would do a binary search in the node, so the complexity of searching in a node is $O(log n)$, not $O(n)$.
If you have n elements in every node, that means the total number of elements is exponential in n. In complexity analysis, n is your total number of elements in the whole tree, so if your tree is balanced there is no way that you would have n elements in any node.
Assume the tree in your question has 4 elements in every node. That means you have 16 nodes in total and in the worst case you have to search through the root and the node with the searched element, which makes 4 elements in total, so your total N=16 and in your worst case you still inspect 4 elements, still O(logN).
You can have the worst case complexity O(n) if
1) the number of keys per node is unlimited, all the keys end up in one node and for some reason the tree is not rebalanced, and
2) the keys in one node are accessed sequentially, and not in some more efficient way.
That would be a terrible way to implement a B-tree, and even in this case, it's still only the worst case complexity. You are partially right though ;-)
• The worst case scenario is supposed to be quite rare. – Irina Rapoport Jun 9 '16 at 23:29
• It is quite rare to make unlimited B-tree, and then it is not fully functional, because you will not use more then one node. – Evil Jun 9 '16 at 23:43
• "It is quite rare to make unlimited B-tree" - I did say it's a bad implementation, "and then it is not fully functional, because you will not use more then one node" - worst cases rarely are. Think the worst case of a hash table, is it very functional? – Irina Rapoport Jun 9 '16 at 23:52
• And I did put a smiley ;-) – Irina Rapoport Jun 9 '16 at 23:52
| 1,594
| 6,169
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.84375
| 4
|
CC-MAIN-2021-25
|
longest
|
en
| 0.920255
|
# Why is b-tree search O(log n)? B-tree is a data structure, which looks like this:
If I want to look for some specific value in this structure, I need to go through several elements in root to find the right child-node. The I need to go through several elements in the child-node to find its right child-node etc. The point is, when I have $n$ elements in every node, then I have to go through all of them in the worst case. So, we have $O(n)$ complexity for searching in one node. Then, we must go through all the levels of the structure, and they're $log_m N$ of them, $m$ being the order of B-tree and $N$ the number of all elements in the tree. So here, we have $O(log N)$ complexity in the worst case. Putting these information together, we should have $O(n) * O(log n) = O(n * log n)$ complexity. But the complexity is just $O(log n)$ - why? What am I doing wrong? • Little n and big N are not the same, this is your error – Kurt Mueller Jun 9 '16 at 16:14
You have introduced $$n$$ and $$m$$ as the order of B-tree, I will stick to $$m$$. Their height will be in the best case $$\lceil log_m(N + 1) \rceil$$, and the worst case is height $$\lceil log_{\frac{m}{2}}(N)\rceil$$ but there is also a saturation factor $$d$$, that you have not mentioned. The height will be $$O(log N)$$, please notice that $$m$$ disappeared, because it effectively is multiplication by a constant. Now at every node you have at most $$m$$ sorted elements, so you can perform binary search giving $$log_2(m)$$, so the proper complexity is $$O(log(N) * log(m))$$. Since $$m << N$$, and what is more important, is that it does not depend on $$N$$, so it should not be mixed, or it might be given explicitly (with $$m$$ not $$N$$ or appearing $$n$$). • There are cases where binary search is not practicable: for instance when each node of the tree contains variable-length strings rather than fixed-length data. The complexity is then indeed $O(m\log{N})$ rather than $O(\log{m}\log{N})$, but as you point out, $m$ is a constant which does not depend on the number of elements in the tree, so it drops out of consideration either way. – Martin Kochanski Jun 9 '16 at 16:43
• Yes, you are right. – Evil Jun 9 '16 at 17:01
• So, if I understand it well, $m$ is considered a constant, because it's a "firm" input, meanwhile $N$ is not constant, because I can insert/delete elements during the algorithm? – Eenoku Jun 10 '16 at 12:17
• Yes, it is let me say construction time constant. Tree with such properties can grow ($N$ can increase as long as you wish or until runs out of memory) without changing the structure. If you populare your tree changing $m$ would require changing all nodes (possible but this is not intended operation on B-tree). – Evil Jun 10 '16 at 12:55
Considering this as an order $m$ B-Tree, whether or not you take $m$ to be a constant, worst case search takes $\Theta(\lg N)$ total comparisons ($N$ values total). As is stated in another answer (as a newbie, I cannot comment on it yet), the height of the tree is about $\log_m N = (\lg N)/(\lg m)$. Especially if you are taking $m$ to be variable, it is assumed that you will have a logarithmic search per node, order $O(\lg m)$.
|
Multiplying those terms, $\log_m N \cdot \lg m = ((\lg N) / (\lg m)) \cdot \lg m = \lg N$, you don't have to drop the $\lg m$ term using big-O, they really do cancel.
|
https://electronics.stackexchange.com/questions/57861/maximum-power-for-arduino-monster-moto-shield
| 1,558,912,454,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-22/segments/1558232260161.91/warc/CC-MAIN-20190526225545-20190527011545-00192.warc.gz
| 453,836,575
| 37,071
|
# Maximum power for Arduino Monster Moto Shield
I'm reading the specs for the SparkFun Monster Moto shield which specifies that
• Max Voltage is 16V
• Maximum Current 30A
Given Ohm's law, does that mean that the maximum power is 480 watts? That seems like a lot!!!
Am I missing something? Please do excuse my ignorance, I'm only starting out with robots and electronics and still at the beginning of the All About Circuits book.
The board is rated for a peak current of 30A, not continuous driver current of 30A. The continuous drive current spec'd is 14A.
In section 4 of the datasheet for the motor driver, it has the package and thermal dissipation information for the driver.
Figure 40 shows the junction-ambient thermal resistance of the board in natural convection (i.e. no fans). The board itself has roughly an area of $3.5cm \cdot 6cm = 21cm^2$, which is shared between the two motor drivers. For the sake of simplicity, let's assume that we only have 1 driver running, and we only get an effective ~$15cm^2$ of PCB heat sinking (close to the max values shown in the plot). At this level we have the following junction-ambient thermal resistances:
$R_{thHS} = 28 \frac{C}{W}\\ R_{thLS} = 26 \frac{C}{W}\\ R_{thHSLS} = R_{thLSLS} = 7.5 \frac{C}{W}$
The temperature rise above ambient is then given in Table 15. For this example, let's assume we're driving HSA and LSB, and we're analyzing $T_{jHSAB}$ (junction temperature rise of the high side gates).
$T_{jHSAB} = P_{HS} \cdot R_{thHS} + P_{LS} \cdot R_{thHSLS} + T_{amb}$
Now let's refer to the electrical characteristics of the device. The MOSFET gates can be modeled as resistors when on, with the following resistance values:
$R_{HS} = 28 m\Omega\\ R_{LS} = 10 m\Omega\\$
The power dissipation of a resistor given a current:
$P = I^2 \cdot R$
So re-writing the junction temperature rise equation, we get:
$T_{jHSAB} = I^2 \cdot R_{HS} \cdot R_{thHS} + I^2 \cdot R_{LS} \cdot R_{thHSLS} + T_{amb}$
Plotting this vs. current, we get:
At ~12A of continuous drive we've exceeded the allowable junction temperature of the chip. At this current, a rough heat dissipation calculation for the driver chip is:
$P_{d} = I^2 \cdot R_{HS} + I^2 \cdot R_{LS} = 5.47W$
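A small script reproducing these numbers (a sketch, not from the original answer; the resistances and thermal figures are the datasheet values quoted above, while the 25 °C ambient and the list of currents are assumptions for illustration):
```python
# Junction-temperature estimate T_j(I) = I^2*R_HS*Rth_HS + I^2*R_LS*Rth_HSLS + T_amb
R_HS, R_LS = 28e-3, 10e-3          # high-/low-side on-resistance, ohms
Rth_HS, Rth_HSLS = 28.0, 7.5       # thermal resistances, deg C per W
T_amb = 25.0                       # assumed ambient temperature, deg C

for I in (6.0, 10.0, 12.0, 14.0):  # drive current, A
    P_HS, P_LS = I**2 * R_HS, I**2 * R_LS
    T_j = P_HS * Rth_HS + P_LS * Rth_HSLS + T_amb
    print(f"I = {I:4.1f} A: P_d = {P_HS + P_LS:5.2f} W, T_j ~ {T_j:5.1f} C")

# At 12 A this gives P_d ~ 5.47 W (matching the figure above) and T_j ~ 149 C;
# compare T_j with the device's maximum junction temperature in the datasheet.
```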
In addition to these calculations, DC motors don't have a constant current consumption. Rather, as the motor gets faster it generates a back EMF which will decrease the current flowing through the motor until the motor reaches the no-load speed and consumes nearly 0 current. Maximum current is consumed at stall (the reason why stalling DC motors is bad). Maximum mechanical power occurs at half the no-load speed.
So let's assume we're driving a motor with 16V and we want maximum mechanical power. Let's say for sake of argument this results in 12A flowing through the circuit. At half speed we get an 8V back EMF, resulting in a maximum mechanical power of (assuming 100% efficient motor):
$P_M = (16V - 8V) \cdot 12A = 96W$
So as you can see the mechanical motor power is significantly higher than the heat losses of the motor driver.
• Hm, I looked at the datasheet, and I can't find where it lists Peak or Continuous current ratings. It lists Imax=30A as continuous under Absolute Max, and the very first page lists 30A as its featured rating as well. The datasheet's specifications are all done at IOUT = 15A, not 14. But I'm only seeing 30A as the continuous, and nothing listed for a peak or pulsed. – Passerby Feb 13 '13 at 3:31
• On the sparkfun page they list 14A max continuous drive for the board. It's more of a recommendation, though and from my experience I would treat that value with some suspicion. Probably best to stick under that at least. – helloworld922 Feb 13 '13 at 3:33
• Ah, the "practical current". Yea, someone made a comment about the eagle files being iffy and someone else about how the copper weight needs to be a higher 6oz for proper heat dispersal to handle a high current rate. – Passerby Feb 13 '13 at 4:09
• @helloworld922 I will need to do some further reading on the information you've provided because as mentioned in my OP I am still very much a beginner :) However, thank you for the elaborate answer it deserves my vote and tick no doubt. Cheers – Marko Feb 13 '13 at 7:25
Yes, it's a lot (but motors can take that or more). No, the board can't handle it without adding heat sinks and cooling. The average use case for these is more like 12V/6A or 72 watts (stall current). Design goals should always be to (reasonably) over-design, for protection. You don't want to run parts at their maximum capacity, for safety, longevity, and ease of expandability.
• Thanks @Passerby, so how did you come up with the 12V/6A figures? The power you suggest seems to only be 15% of the maximum capacity. – Marko Feb 13 '13 at 3:11
• Oh sorry. Just based on Sparkfun's target audience, the comments and everything. 12v 2A motors are standard hobbyist parts. (Plus stall current is higher). Biggest thing is that without heatsinks, more than a few continuous amps will quickly cause heat issues. The output current is dependent on the junction temperature, if you can't keep it cool, it won't work well. – Passerby Feb 13 '13 at 3:27
• Not sure who downvoted you but you have my +1. Thank you for your answer. – Marko Feb 13 '13 at 23:13
• @Marko, appreciate it. – Passerby Feb 13 '13 at 23:26
• Twasn't me, but the motor driver almost definitely cannot handle 72W of heat dissipation. As I demonstrated in my post the amount of power dissipated by the driver should be small compared to the amount of mechanical power output from the motor. If it wasn't, that would be one lousy driver. This is probably the biggest reason motors are driven with PWM instead of using a linear driver. – helloworld922 Feb 15 '13 at 3:18
| 1,530
| 5,781
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.75
| 4
|
CC-MAIN-2019-22
|
latest
|
en
| 0.864165
|
# Maximum power for Arduino Monster Moto Shield
I'm reading the specs for the SparkFun Monster Moto shield which specifies that
• Max Voltage is 16V
• Maximum Current 30A
Given Ohm's law, does that mean that the maximum power is 480 watts? That seems like a lot!!! Am I missing something? Please do excuse my ignorance, I'm only starting out with robots and electronics and still at the beginning of the All About Circuits book. The board is rated for a peak current of 30A, not continuous driver current of 30A. The continuous drive current spec'd is 14A. In section 4 of the datasheet for the motor driver, it has the package and thermal dissipation information for the driver. Figure 40 shows the junction-ambient thermal resistance of the board in natural convection (i.e. no fans). The board itself has roughly an area of $3.5cm \cdot 6cm = 21cm^2$, which is shared between the two motor drivers. For the sake of simplicity, let's assume that we only have 1 driver running, and we only get an effective ~$15cm^2$ of PCB heat sinking (close to the max values shown in the plot). At this level we have the following junction-ambient thermal resistances:
$R_{thHS} = 28 \frac{C}{W}\\ R_{thLS} = 26 \frac{C}{W}\\ R_{thHSLS} = R_{thLSLS} = 7.5 \frac{C}{W}$
The temperature rises above ambient is then given in table 15. For this example, let's assume we're driving HSA and LSB, and we're analyzing $T_{jHSAB}$ (junction temperature rise of the high side gates). $T_{jHSAB} = P_{HS} \cdot R_{thHS} + P_{LS} \cdot R_{thHSLS} + T_{amb}$
Now let's refer to the electrical characteristics of device. The MOSFET gates can be modeled as resistors when on, with the following resistance values:
$R_{HS} = 28 m\Omega\\ R_{LS} = 10 m\Omega\\$
The power dissipation of a resistor given a current:
$P = I^2 \cdot R$
So re-writing the junction temperature rise equation, we get:
$T_{jHSAB} = I^2 \cdot R_{HS} \cdot R_{thHS} + I^2 \cdot R_{LS} \cdot R_{thHSLS} + T_{amb}$
Plotting this vs. current, we get:
At ~12A of continuous drive we've exceeded the allowable thermal junction of the chip. At this current, A rough heat dissipation calculation for the driver chip is:
$P_{d} = I^2 \cdot R_{HS} + I^2 \cdot R_{LS} = 5.47W$
In addition to these calculation, DC motors don't have a constant current consumption. Rather, as the motor gets faster it generates a back EMF which will decrease the current flowing through the motor until the motor reaches the no load speed and consumes near 0 current. Maximum current is consumed at stall (reason why stalling DC motors is bad). Maximum mechanical power occurs at half the no-load speed. So let's assume we're driving a motor with 16V and we want maximum mechanical power. Let's say for sake of argument this results in 12A flowing through the circuit.
|
At half speed we get an 8V back EMF, resulting in a maximum mechanical power of (assuming 100% efficient motor):
$P_M = (16V - 8V) \cdot 12A = 96W$
So as you can see the mechanical motor power is significantly higher than the heat losses of the motor driver.
|
http://mathematica.stackexchange.com/tags/arithmetic/hot?filter=year
| 1,469,301,750,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257823387.9/warc/CC-MAIN-20160723071023-00153-ip-10-185-27-174.ec2.internal.warc.gz
| 154,545,468
| 12,143
|
# Tag Info
6
I feel like I should be prefacing this answer with three confessions, considering that this is an arithmetic question. First, I had a hard time with the multiplication tables until I was nine years old. Second, even after I finally got the hang of multiplication, I was never a fan of multiplying from right-to-left; I preferred going left-to-right. (Arthur ...
6
It's not a bug and it's not so uncommon. For an explanation have a look here. This and some related issues also appear in this MathGroup thread. Also relevant: 1 2.
3
They are not identical computations. With the first form, (mu/2 gt).gt Mathematica can take advantage of vector arithmetic, usually going through specialized routines like LAPACK. The second form, Sum[(mu[[i]]/2 gt[[i]]) gt[[i]], {i, Length@mu}] however, will usually be calculated term by term because there is a possibility that the input can change ...
2
If I understand your question, there is no need for Mathematica to solve your problem: for $x_0$ given, we know that $y_{n+1} = y_{n} + 5$. We know also that $y_0 y_1 = 12500$. That is to say that $y_0 y_1 = y_0 (y_0 + 5) = y_0^2 + 5 y_0 = 12500$. You can ask Mathematica to solve this second order equation to obtain $x_0 = 109.331$ --- there is also a ...
1
At the moment, this is just some random thoughts and observations. I will try to morph it into a coherent answer, soon. First, a determinant can be reasonably calculated using LUDecomposition, e.g. Clear[ludet]; ludet[nn_] := ludet[nn] = Block[{u, s1}, u = First@LUDecomposition@Table[s1[i1, i2], {i1, 1, nn}, {i2, 1, nn}]; Times @@ Diagonal[u ...
1
You have quite a small data set, so a really inefficient brute force search will still run pretty fast (<1 sec on my computer). I stress that this is a STUPID way to do it, and with list manipulation you can surely make it MUCH more efficient. But as I said - it works. First, transform the data so that you could retrieve the data by calling f[x1,x2,x3], ...
Only top voted, non community-wiki answers of a minimum length are eligible
| 554
| 2,056
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.53125
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.939679
|
# Tag Info
6
I feel like I should be prefacing this answer with three confessions, considering that this is an arithmetic question. First, I had a hard time with the multiplication tables until I was nine years old. Second, even after I finally got the hang of multiplication, I was never a fan of multiplying from right-to-left; I preferred going left-to-right. (Arthur ...
6
It's not a bug and it's not so uncommon. For an explanation have a look here. This and some related issues also appear in this MathGroup thread. Also relevant: 1 2. 3
They are not identical computations. With the first form, (mu/2 gt).gt Mathematica can take advantage of vector arithmetic, usually going through specialized routines like LAPACK. The second form, Sum[(mu[[i]]/2 gt[[i]]) gt[[i]], {i, Length@mu}] however, will usually be calculated term by term because there is a possibility that the input can change ...
2
If I understand your question, there is no need of Mathematica to solve your problem for $x_0$ given, we know that $y_{n+1} = y_{n} + 5$. We know also that $y_0 y_1 = 12500$. That is to say that $y_0 y_1 = y_0 (y_0 + 5) = y_0^2 + 5 y_0 = 12500$.
|
You can ask Mathematica to solve this second order equation to obtain $x_0 = 109.331$ --- there is also a ...
1
At the moment, this is just some random thoughts and observations.
|
https://physics.stackexchange.com/questions/733546/wave-eigenfunction-and-eigenvalue-for-step-potential
| 1,702,080,764,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-50/segments/1700679100779.51/warc/CC-MAIN-20231208212357-20231209002357-00219.warc.gz
| 514,777,228
| 41,513
|
# Wave eigenfunction and eigenvalue for step potential
Given the Schrödinger equation:
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi = E\psi$$
where:
$$\left\{ \begin{array}{l} V(x) = V_0 \text{ for }x>a \\ V(x) = 0 \text{ for } 0\leq x \leq a \\ V(x) = \infty \text{ for } x<0 \end{array}\right.$$
and $$V_0 > E$$. Solving the Schrödinger equation we get, for $$0\leq x\leq a$$:
$$\psi(x) = N_1\sin\left(\sqrt{\frac{2mE}{\hbar^2}}x+\phi\right)$$
And for $$x>a$$:
$$\psi(x) = N_2\exp\left({-\sqrt{\frac{2m(V_0-E)}{\hbar^2}}x}\right)$$
Where I neglected the other term $$\exp\left({\sqrt{\frac{2m(V_0-E)}{\hbar^2}}x}\right)$$ because the wave function should be normalizable.
The thing is, because we want $$\psi(0)=0$$, we fix $$\phi = 0$$. We are left with two unknowns, while we have three conditions left: we want $$\psi$$ to be continuous at $$a$$, we want $$\psi'(x)$$ to be continuous at $$a$$, and finally we want it to be normalized.
How is this possible?
You don't have to take the normalization condition into account, because if you have
$$\begin{cases} \psi_<(a)=\psi_>(a) \\ \psi_{<}^{'}(a)=\psi_{>}^{'}(a)\\ \psi(0)=0\\ \end{cases}$$
then it's easy to check that if you define $$\tilde{\psi}= \frac{\psi}{\left(\int \psi^2\, dx\right)^{1/2}}$$ you obtain
$$\begin{cases} \tilde{\psi}_<(a)=\tilde{\psi}_>(a) \\ \tilde{\psi}_{<}^{'}(a)=\tilde{\psi}_{>}^{'}(a)\\ \tilde{\psi}(0)=0\\ \end{cases}$$
We are left with 2 Unknowns, while we have 3 conditions left. We want $$\psi$$ to be continuous at $$a$$, we want $$\psi′(x)$$ to be continuous at $$a$$, and finally we want it to be normalized.
How is this possible?
You are right, for most values of $$E$$ there is no solution. However, for some special values of $$E$$ there is a solution. And these are the eigenvalues you are looking for.
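To make the last remark concrete (an added note, not in the original answers): dividing the derivative-continuity condition at $$x=a$$ by the continuity condition eliminates the amplitudes $$N_1, N_2$$ and leaves a single transcendental equation for $$E$$,
$$\sqrt{\frac{2mE}{\hbar^2}}\,\cot\!\left(\sqrt{\frac{2mE}{\hbar^2}}\,a\right)=-\sqrt{\frac{2m(V_0-E)}{\hbar^2}},$$
whose solutions with $$0<E<V_0$$ are exactly the allowed eigenvalues; the remaining constant is then fixed by normalization.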
| 649
| 1,798
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.6875
| 4
|
CC-MAIN-2023-50
|
longest
|
en
| 0.799716
|
# Wave eigenfunction and eigenvalue for step potential
Given the Schrödinger equation:
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} + V(x)\psi = E\psi$$
where:
$$\left\{ \begin{array}{l} V(x) = V_0 \text{ for }x>a \\ V(x) = 0 \text{ for } 0\leq x \leq a \\ V(x) = \infty \text{ for } x<0 \end{array}\right.$$
and $$V_0 > E$$. Solving the Schrödinger equation we get, for $$0\leq x\leq a$$:
$$\psi(x) = N_1\sin\left(\sqrt{\frac{2mE}{\hbar^2}}x+\phi\right)$$
And for $$x>a$$:
$$\psi(x) = N_2\exp\left({-\sqrt{\frac{2m(V_0-E)}{\hbar^2}}x}\right)$$
Where I neglected the other term $$\exp\left({\sqrt{\frac{2m(V_0-E)}{\hbar^2}}x}\right)$$ because the wave function should be normalizable. The thing is, because we want $$\psi(0)=0$$, we fix $$\phi = 0$$. We are left with 2 Unknowns, while we have 3 conditions left. We want $$\psi$$ to be continuous at $$a$$ , We want $$\psi'(x)$$ to be continuous at $$a$$, and finally we want it to be normalized. How is this possible? You don't have to take account of normalization condition because if you have
$$\begin{cases} \psi_<(a)=\psi_>(a) \\ \psi_{<}^{'}(a)=\psi_{>}^{'}(a)\\ \psi(0)=0\\ \end{cases}$$
so it's easy to check that if you define $$\tilde{\psi}= \frac{\psi}{\int \psi^2 dx}$$ you obtain
$$\begin{cases} \tilde{\psi}_<(a)=\tilde{\psi}_>(a) \\ \tilde{\psi}_{<}^{'}(a)=\tilde{\psi}_{>}^{'}(a)\\ \tilde{\psi}(0)=0\\ \end{cases}$$
We are left with 2 Unknowns, while we have 3 conditions left.
|
We want $$\psi$$ to be continuous at $$a$$, we want $$\psi′(x)$$ to be continuous at $$a$$, and finally we want it to be normalized.
|
https://math.stackexchange.com/questions/1735139/how-to-reduce-into-canonical-form/1735960
| 1,611,815,740,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-04/segments/1610704835901.90/warc/CC-MAIN-20210128040619-20210128070619-00139.warc.gz
| 445,229,400
| 32,455
|
# How to reduce into canonical form
Determine the type of the following equation and reduce the PDE to its canonical form $u_{xx} + 4u_{xy} + 4u_{yy} + u = 0$.
We consider pdes in the form $$a_{11}(x,y)u_{xx}+2a_{12}(x,y)u_{xy}+a_{22}(x,y)u_{yy} +F(x,y,u,u_x,u_y)=0$$
Since in our case $a_{12}^2-a_{11}a_{22}=0$, we have that it is parabolic.
Then I think we find $$\frac{dy}{dx}=\frac{a_{12} \pm \sqrt{a_{12}^2-a_{11}a_{22}}}{a_{11}}=2$$ But then what?
Define $\eta \left( x,y \right)=y-2x,\text{ }$ and choose $\xi \left( x,\text{ }y \right)=x$ such that the Jacobian $J:={{\xi }_{x}}{{\eta }_{y}}-{{\xi }_{y}}{{\eta }_{x}}$ does not vanish. Let $v\left( \xi ,\eta \right)=u\left( x,y \right).$ Substituting the new coordinates $\xi$ and $\eta$ into the given equation, we obtain
$$\left( {{v}_{\xi \xi }}-4{{v}_{\xi \eta }}+4{{v}_{\eta \eta }} \right)+4\left( {{v}_{\xi \eta }}-2{{v}_{\eta \eta }} \right)+4{{v}_{\eta \eta }}+v=0\text{ }.$$ Thus,
$${{v}_{\xi \xi }}+v=0,$$ and this is the desired canonical form.
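For reference (an added step, not spelled out in the original answer), the chain-rule derivatives used in the substitution, with $u(x,y)=v(\xi,\eta)$, $\xi=x$, $\eta=y-2x$, are
$$u_x=v_\xi-2v_\eta,\qquad u_y=v_\eta,$$
$$u_{xx}=v_{\xi\xi}-4v_{\xi\eta}+4v_{\eta\eta},\qquad u_{xy}=v_{\xi\eta}-2v_{\eta\eta},\qquad u_{yy}=v_{\eta\eta},$$
which is exactly what was inserted into $u_{xx}+4u_{xy}+4u_{yy}+u=0$ above.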
| 416
| 1,020
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.1875
| 4
|
CC-MAIN-2021-04
|
latest
|
en
| 0.685246
|
# How to reduce into canonical form
Determine the type of the following equation and reduce the PDE to its canonical form $u_{xx} + 4u_{xy} + 4u_{yy} + u = 0$. We consider pdes in the form $$a_{11}(x,y)u_{xx}+2a_{12}(x,y)u_{xy}+a_{22}(x,y)u_{yy} +F(x,y,u,u_x,u_y)=0$$
Since in our case $a_{12}^2-a_{11}a_{22}=0$, we have that it is parabolic. Then I think we find $$\frac{dy}{dx}=\frac{a_{12} \pm \sqrt{a_{12}^2-a_{11}a_{22}}}{a_{11}}=2$$ But then what? Define $\eta \left( x,y \right)=y-2x,\text{ }$ and choose $\xi \left( x,\text{ }y \right)=x$ such that the Jacobian $J:={{\xi }_{x}}{{\eta }_{y}}-{{\xi }_{y}}{{\eta }_{x}}$ does not vanish.
|
Let $v\left( \xi ,\eta \right)=u\left( x,y \right).$ Substituting the new coordinates $\xi$ and $\eta$ into the given equation, we obtain
$$\left( {{v}_{\xi \xi }}-4{{v}_{\xi \eta }}+4{{v}_{\eta \eta }} \right)+4\left( {{v}_{\xi \eta }}-2{{v}_{\eta \eta }} \right)+4{{v}_{\eta \eta }}+v=0\text{ }.$$ Thus,
$${{v}_{\xi \xi }}+v=0,$$ and this is the desired canonical form.
|
https://cs.stackexchange.com/questions/48256/proof-that-a-given-language-is-not-context-free
| 1,713,441,448,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296817206.28/warc/CC-MAIN-20240418093630-20240418123630-00514.warc.gz
| 166,228,339
| 41,683
|
# Proof that a given language is not context-free
Given the language $L = \{w \in \{a,b\}^* \, | \, |w| = n \cdot \sqrt{n} \text{ and } n \geq 42\}$ and the assignment to prove that $L \notin CFL$ with the pumping lemma.
Assuming $L \in CFL$, would it be possible to start by defining a language $L' := L \cap a^+$, which has to be context-free since $CFL$ is closed under intersection with $REG$? Now I would have to prove that $L' = \{w \in a^+ \, | \, |w| = n \cdot \sqrt{n} \text{ and } n \geq 42\}$ isn't regular, because over a one-symbol alphabet the context-free and regular languages coincide.
Let $k$ be the constant of the Pumping lemma and $m > k$ and $m > 42$. So $z = a^{m^2\cdot\sqrt{m^2}} = a^{m^3} \in L'$.
$|z| = |uvw| = m^3 ...$
How to continue?
You're off to a good start. You recognize that all you have to do is show that the language $L'$ isn't regular. You let $k$ be the integer of the PL and choose an integer $m$ with $m>k$ and $m>42$ so you choose to pump the string $z=a^{m^3}$. Write this as $uvw$ with $|v|=t$ and $0<t<k$. Now we'll have $|uv^2w|=m^3+t<m^3+m$. This string can't be in $L'$ since it's strictly smaller than the next largest string in $L'$, namely the one with length $(m+1)^3$, since obviously $$m^3+m<m^3+3m^2+3m+1$$ Since $L'$ isn't regular, $L$ can't be a CFL.
Use the pumping lemma for CFLs. Let N be the pumping lemma's constant, and take $\sigma = a^{N^{3/2}}$, which certainly has $\lvert \sigma \rvert \ge N$. So we can write $\sigma = v w x y z$ with $\lvert w x y \rvert \le N$ and $w y \ne \epsilon$ such that for all $k \ge 0$ we have $v w^k x y^k z \in L$.
But here only lengths matter. Call $u = \lvert w y \rvert$, so that the length of the pumped string is:
\begin{align} \lvert v w^k x y^k z \rvert = N^{3/2} + (k - 1) u \end{align}
where we know that $u \le N$.
Now look for $n$ such that $(n + 1)^{3/2} - n^{3/2} > N$. By the binomial theorem:
\begin{align} (n + 1)^{3/2} - n^{3/2} &= n^{3/2} \left( (1 + 1/n)^{3/2} - 1 \right) \\ &= n^{3/2} \left( 1 + \frac{3}{2 n} + \dotsb - 1 \right) \\ &\ge \frac{3 \sqrt{n}}{2} \\ & > N \end{align}
This holds for $n > 4 N^2 / 9$. This provides a stride larger than $N$ between lengths of strings in the language, so the pumped string will fall short.
For any context-free language $L$, the set $S$ of lengths of words in $L$ is eventually periodic, that is there exists an $m>0$ such that $x \in S$ iff $x + m \in S$ (this is one form of Parikh's theorem). In your case, if $L$ were linear then $S = \{ k^3 : k^2 \geq 42 \}$ would be eventually periodic. However, an eventually periodic set is either finite or has positive density, whereas this set $S$ is neither.
A more general form of Parikh's theorem states that the set of histograms of words in a context-free language (the histogram of a word counts how many times each symbol appears in it) is semi-linear, which is a union of linear sets, a linear set being a set of the form $\{ x + n_1 y_1 + \cdots + n_r y_r : n_1,\ldots,n_r \in \mathbb{N} \}$, where $x,y_1,\ldots,y_r \in \mathbb{N}^\Sigma$. This is useful for proving that languages defined using more complicated constraints are not context-free.
• ... and particularly useful for languages over a one-symbol alphabet. Oct 14, 2015 at 23:51
| 1,097
| 3,232
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.96875
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.826754
|
# Proof that a given language is not context-free
Given the language $L = \{w \in \{a,b\}^* \, | \, |w| = n \cdot \sqrt{n} \text{ and } n \geq 42\}$ and the assignement to proof that $L \notin CFL$ with the Pumping lemma. Assuming $L \in CFL$, would it be possible to start with defining a language $L' := L \cap a^+$ which has to be context-free since $CFL$ is closed under intersection with $REG$. Now I would have to proof that $L' = \{w \in a^+ \, | \, |w| = n \cdot \sqrt{n} \text{ and } n \geq 42\}$ isn't regular because the alphabet contains only one symbol. Let $k$ be the constant of the Pumping lemma and $m > k$ and $m > 42$. So $z = a^{m^2\cdot\sqrt{m^2}} = a^{m^3} \in L'$. $|z| = |uvw| = m^3 ...$
How to continue? You're off to a good start. You recognize that all you have to do is show that the language $L'$ isn't regular. You let $k$ be the integer of the PL and choose an integer $m$ with $m>k$ and $m>42$ so you choose to pump the string $z=a^{m^3}$. Write this as $uvw$ with $|v|=t$ and $0<t<k$. Now we'll have $|uv^2w|=m^3+t<m^3+m$. This string can't be in $L'$ since it's strictly smaller than the next largest string in $L'$, namely the one with length $(m+1)^3$, since obviously $$m^3+m<m^3+3m^2+3m+1$$ Since $L'$ isn't regular, $L$ can't be a CFL. Use the pumping lemma for CFLs. Let N be the pumping lemma's constant, and take $\sigma = a^{N^{3/2}}$, which certainly has $\lvert \sigma \rvert \ge N$. So we can write $\sigma = v w x y z$ with $\lvert w x y \rvert \le N$ and $w y \ne \epsilon$ such that for all $k \ge 0$ we have $v w^k x y^k z \in L$. But here only lengths matter. Call $u = \lvert w y \rvert$, so that the length of the pumped string is:
\begin{align} \lvert v w^k x y^k z \rvert = N^{3/2} + (k - 1) u \end{align}
where we know that $u \le N$. Now look for $n$ such that $(n + 1)^{3/2} - n^{3/2} > N$. By the binomial theorem:
\begin{align} (n + 1)^{3/2} - n^{3/2} &= n^{3/2} \left( (1 + 1/n)^{3/2} - 1 \right) \\ &= n^{3/2} \left( 1 + \frac{3}{2 n} + \dotsb - 1 \right) \\ &\ge \frac{3 \sqrt{n}}{2} \\ & > N \end{align}
This is $n > 4 N^2 / 9$. This provides a stride larger than $N$ between lengths of strings in the language, the pumped string will fall short. For any context-free language $L$, the set $S$ of lengths of words in $L$ is eventually periodic, that is there exists an $m>0$ such that $x \in S$ iff $x + m \in S$ (this is one form of Parikh's theorem).
|
In your case, if $L$ were linear then $S = \{ k^3 : k^2 \geq 42 \}$ would be eventually periodic.
|
https://math.stackexchange.com/questions/1653694/proving-that-the-sequence-converges
| 1,569,014,038,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-39/segments/1568514574077.39/warc/CC-MAIN-20190920200607-20190920222607-00316.warc.gz
| 568,076,825
| 30,380
|
# Proving that the sequence converges
I would like some help with the following problem. Thanks for any help in advance.
Let $(x_n)$ and $(y_n)$ be convergent sequences of positive real numbers. Let $x_n \xrightarrow[n \to \infty]{} x$ and $y_n \xrightarrow[n \to \infty]{} y$ and suppose that $x > 0$. Prove that the sequence $(x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2$, $n \geq 1$ converges.
• The universal procedure: multiply the top and the missing bottom by $(x_n n^4+y_n n^2)^{1/2} +x_n^{1/2}n^2$. – André Nicolas Feb 13 '16 at 21:23
$\begin{array}\\ (x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2 &=((x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2) \dfrac{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}\\ &=\dfrac{(x_nn^4 + y_nn^2)- x_nn^4}{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}\\ &=\dfrac{ y_nn^2}{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}\\ &=\dfrac{ y_n}{(x_n + y_n/n^2)^{1/2} + x_n^{1/2}}\\ \end{array}$
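The original answer stops at this simplification; the conclusion (added here) follows immediately. Since $x_n \to x > 0$, $y_n \to y$ and $y_n/n^2 \to 0$,
$$(x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2=\dfrac{ y_n}{(x_n + y_n/n^2)^{1/2} + x_n^{1/2}}\xrightarrow[n \to \infty]{}\frac{y}{2\sqrt{x}},$$
so the sequence converges.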
| 432
| 926
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.1875
| 4
|
CC-MAIN-2019-39
|
latest
|
en
| 0.472632
|
# Proving that the sequence converges
I would like some help with the following problem. Thanks for any help in advance. Let $(x_n)$ and $(y_n)$ be convergent sequences of positive real numbers. Let $x_n \xrightarrow[n \to \infty]{} x$ and $y_n \xrightarrow[n \to \infty]{} y$ and suppose that $x > 0$. Prove that the sequence $(x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2$, $n \geq 1$ converges. • The universal procedure, multiply the top and missing bottom by $(x_n n^4+y_n^2)^{1/2} +x_n^{1/2}n^2$.
|
– André Nicolas Feb 13 '16 at 21:23
$\begin{array}\\ (x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2 &=((x_nn^4 + y_nn^2)^{1/2} - x_n^{1/2}n^2) \dfrac{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}\\ &=\dfrac{(x_nn^4 + y_nn^2)- x_nn^4}{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}\\ &=\dfrac{ y_nn^2}{(x_nn^4 + y_nn^2)^{1/2} + x_n^{1/2}n^2}\\ &=\dfrac{ y_n}{(x_n + y_n/n^2)^{1/2} + x_n^{1/2}}\\ \end{array}$
|
https://stats.stackexchange.com/questions/133124/fitting-a-quadratic-through-5-points-goal-is-to-find-the-maximum
| 1,713,051,741,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00653.warc.gz
| 492,789,173
| 41,866
|
# Fitting a quadratic through 5 points, goal is to find the maximum
I have some physical experiments done at various locations. The locations produces a set of observations y for one value of x, the independent variable. In the end across a set of locations I have values in the following form
[y11, y21, y31, y41, y51, y61...] for one value of x, say x1
Then I repeat the experiment and get a new set of values
[y21, y22, y32, y42, y52, y62...] for a different value of x say x2
And so on.
In the end I have readings for y for 5 distinct values of x, [x1, x2, x3, x4, x5]
I wish to fit a quadratic to this data, and my main goal is to find the value of x, for which y is maximum. One way to do this is to define an average y for each x1 and fit a quadratic with the averaged out values. I know that fitting a quadratic to just 5 points is not a good idea. I am open to other ideas, that can help me solve the problem without directly fitting a functional relationship.
Some non-parametric idea for instance. One idea I have is to do some distribution analysis on values of y for x1, vs values of y for x2 and so on. This enables me to take all the y values for one x1, without averaging them. I would be open to other ideas and suggestions.
• "without directly fitting a functional relationship" - why don't you want to fit this? (I assume you have good reasons for suspecting a quadratic relationship, right?) Jan 12, 2015 at 15:23
• Yes, I do. Well, primarily because I think 5 points is too little to infer the curvature and the slope etc. I guess what I mean to say is to find a way to "not lose" the information by virtue of averaging the values of y, while fitting the quadratic.
– gbh.
Jan 12, 2015 at 15:26
• What causes variation in the measurement x within each location? Is it measurement error or something else? Jan 12, 2015 at 15:36
• Locational effects primarily plus yes some measurement error.
– gbh.
Jan 12, 2015 at 15:37
You are right that inferring a parabola from five points is too little data. But you have more than five points, namely all your measurements! Don't do any averaging (though this would already help), just fit the parabola to all your data. Model fitting doesn't mind multiple x values.
Let's do this in R. Some dummy data over five different x values:
set.seed(1)
xx <- rep(1:5,each=10)
yy <- -xx^2+6*xx-5+rnorm(length(xx),0,1)
Now we can fit the model. Note the I() to protect your square term, and note that this really presupposes homoskedastic errors:
model <- lm(yy~xx+I(xx^2))
Now we have a quadratic relationship. Some elementary calculus gives us the (estimated) x coordinate of maximum of the fitted parabola:
xx.max <- -coef(model)[2]/(2*coef(model)[3])
It's always a good idea to get a sense of the variability of our results. It may be possible to derive a confidence interval for xx.max analytically, but the bootstrap is always easier, and given enough data (50 points should be enough), it should be valid:
require(boot)
foo <- boot(data=data.frame(xx=xx,yy=yy),statistic=function(data,indices){
model <- lm(yy~xx+I(xx^2),data[indices,])
-coef(model)[2]/(2*coef(model)[3])},
strata=xx,
R=1000)
Note that I am doing a stratified bootstrap, i.e., I am sampling with replacement within each x value, which makes sense here, since the data really are stratified.
So we can plot our points, the fitted parabola and the x coordinate of the estimated maximum together with the bootstrapped confidence interval:
plot(xx,yy,pch=19)
rect(quantile(foo$t,0.025),-2,quantile(foo$t,0.975),6,border=NA,col="lightgrey")
points(xx,yy,pch=19)
xx.plot <- seq(1,5,by=.01)
lines(xx.plot,predict(model,newdata=data.frame(xx=xx.plot)))
abline(v=xx.max)
• This is interesting, never thought like this though I know this can be done. What advantages (statistical) does this have over just averaging the data?
– gbh.
Jan 12, 2015 at 15:36
• Truth be told, averaging probably gives you pretty similar results. One difference is that here, every point has the same impact on the result - if you average first and the groups have different sizes, then a point in a small group will have a larger impact on the end result than a point in a big group. Jan 12, 2015 at 15:39
• I edited the answer to add a bootstrapped confidence interval. This works fine for my toy data but may blow up if your estimated leading coefficient is close to zero. But try it! No estimate should come without a measure of its variability! Jan 12, 2015 at 15:51
• Thanks Stephan, how about this. For the bootstrap, I randomly sample one y for each of x1, x2, x3, x4, x5, i.e from the respective distributions. So in the end I have tuples of points (xi,yi), where i is 1 to 5. Note y1, is a random sample from the distribution [y11, y12, y13, ....]. I do this howsoever times I want, fit the quadratic, find the maxima. And then use this to find the confidence.
– gbh.
Jan 12, 2015 at 15:55
• That is also a possibility. It will essentially simulate having only one y per x, though, "forgetting" that you really have many more data points. It will thus overestimate the variance for the max estimate. Alternatively, do a stratified bootstrap, sampling with replacement within each x coordinate. That would probably be more appropriate for your data, anyway. Let me edit the question to stratify. (Not that it makes a big difference, anyways.) Jan 12, 2015 at 16:02
| 1,423
| 5,398
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.515625
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.913329
|
# Fitting a quadratic through 5 points, goal is to find the maximum
I have some physical experiments done at various locations. The locations produces a set of observations y for one value of x, the independent variable. In the end across a set of locations I have values in the following form
[y11, y21, y31, y41, y51, y61...] for one value of x, say x1
Then I repeat the experiment and get a new set of values
[y21, y22, y32, y42, y52, y62...] for a different value of x say x2
And so on. In the end I have readings for y for 5 distinct values of x, [x1, x2, x3, x4, x5]
I wish to fit a quadratic to this data, and my main goal is to find the value of x, for which y is maximum. One way to do this is to define an average y for each x1 and fit a quadratic with the averaged out values. I know that fitting a quadratic to just 5 points is not a good idea. I am open to other ideas, that can help me solve the problem without directly fitting a functional relationship. Some non-parametric idea for instance. One idea I have is to do some distribution analysis on values of y for x1, vs values of y for x2 and so on. This enables me to take all the y values for one x1, without averaging them. I would be open to other ideas and suggestions. • "without directly fitting a functional relationship" - why don't you want to fit this? (I assume you have good reasons for suspecting a quadratic relationship, right?) Jan 12, 2015 at 15:23
• Yes, I do. Well, primarily because I think 5 points is too little to infer the curvature and the slope etc. I guess what I mean to say is to find out a way to "not loose" the information by virtue of averaging the values of y, while fitting the quadratic. – gbh. Jan 12, 2015 at 15:26
• What causes variation in the measurement x within each location? Is it measurement error or something else? Jan 12, 2015 at 15:36
• Locational effects primarily plus yes some measurement error. – gbh. Jan 12, 2015 at 15:37
You are right that inferring a parabola from five points is too little data. But you have more than five points, namely all your measurements! Don't do any averaging (though this would already help), just fit the parabola to all your data. Model fitting doesn't mind multiple x values. Let's do this in R. Some dummy data over five different x values:
set.seed(1)
xx <- rep(1:5,each=10)
yy <- -xx^2+6*xx-5+rnorm(length(xx),0,1)
Now we can fit the model. Note the I() to protect your square term, and note that this really presupposes homoskedastic errors:
model <- lm(yy~xx+I(xx^2))
Now we have a quadratic relationship. Some elementary calculus gives us the (estimated) x coordinate of maximum of the fitted parabola:
xx.max <- -coef(model)[2]/(2*coef(model)[3])
It's always a good idea to get an idea about the variability of our results. It may be possibly to derive a confidence interval for xx.max analytically, but the bootstrap is always easier, and given enough data (50 points should be enough), it should be valid:
require(boot)
foo <- boot(data=data.frame(xx=xx,yy=yy),statistic=function(data,indices){
model <- lm(yy~xx+I(xx^2),data[indices,])
-coef(model)[2]/(2*coef(model)[3])},
strata=xx,
R=1000)
Note that I am doing a stratified bootstrap, i.e., I am sampling with replacement within each x value, which makes sense here, since the data really are stratified.
|
So we can plot our points, the fitted parabola and the x coordinate of the estimated maximum together with the bootstrapped confidence interval:
plot(xx,yy,pch=19)
rect(quantile(foo$t,0.025),-2,quantile(foo$t,0.975),6,border=NA,col="lightgrey")
points(xx,yy,pch=19)
xx.plot <- seq(1,5,by=.01)
lines(xx.plot,predict(model,newdata=data.frame(xx=xx.plot)))
abline(v=xx.max)
• This is interesting, never thought like this though I know this can be done.
|
https://math.stackexchange.com/questions/2022498/area-of-a-circle-from-equation-for-circle
| 1,566,597,371,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-35/segments/1566027319082.81/warc/CC-MAIN-20190823214536-20190824000536-00170.warc.gz
| 561,045,759
| 30,281
|
# Area of a circle from equation for circle
Q. The equation of a circle is $2x^2+ \lambda xy+2y^2+( \lambda -4)x+6y-5=0$; find the area of the circle.
Attempt: For the equation to represent a circle, the cross term must vanish, i.e. $\lambda xy=0$.
Thus, $\lambda =0$ and:
$$(\lambda -4)x = 2gx$$ $$6y=2fy$$ $$c=-5$$ $$g=-2, f=3, c=-5$$
Radius of circle = $\sqrt{4+9+5}=\sqrt{18}$
Area of circle= $\pi *18$
But the answer is $\frac {23}{4} * \pi$
• Your argument would be correct if the coefficients of $x^2$ and $y^2$ were $1$. But they are not. – Leo163 Nov 20 '16 at 11:56
• Do they need to be equal? Correct me but shouldn't just their coefficient be equal? To satisfy $a=b$ where a and b are coefficients of x and y respectively – Akshat Batra Nov 20 '16 at 11:58
• @AkshatBatra Yes, the coefficients of the quadratic expressions for $\;x\,,\,\,y\;$ must be equal if we have a circle (otherwise it is an ellipse), but then you must divide through the whole equation by that common coefficient, and that affects the radius...! – DonAntonio Nov 20 '16 at 11:59
Complete squares after putting $\;\lambda xy=0\implies \lambda =0\;$:
$$0=2x^2+2y^2-4x+6y-5=2(x-1)^2-2+2\left(y-\frac32\right)^2-\frac92-5\implies$$
$$\implies2(x-1)^2+2\left(y-\frac32\right)^2=\frac{23}2\implies(x-1)^2+\left(y-\frac32\right)^2=\frac{23}4$$
and we have a circle of radius $\;\sqrt{\frac{23}4}\;$ , so its area is
$$\pi\sqrt{\frac{23}4}^2=\frac{23\pi}4$$
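A quick numerical sanity check, not from the original page (plain R, no extra packages assumed): points on the claimed circle should satisfy the original equation, which confirms the area $\frac{23\pi}{4}$.
f <- function(x, y) 2*x^2 + 2*y^2 - 4*x + 6*y - 5
theta <- seq(0, 2*pi, length.out = 7)
x <- 1 + sqrt(23/4)*cos(theta); y <- -3/2 + sqrt(23/4)*sin(theta)
max(abs(f(x, y)))   # ~ 0 up to rounding error
pi * 23/4           # the area, about 18.06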
| 510
| 1,420
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.4375
| 4
|
CC-MAIN-2019-35
|
latest
|
en
| 0.775057
|
# Area of a circle from equation for circle
Q. Equation of circle- $2x^2+ \lambda xy+2y^2+( \lambda -4)x+6y-5=0$ find area of the circle. Attempt- For converting the equation from second degree to first degree $\lambda xy=0$. Thus, $\lambda =0$ and-
$$(\lambda -4)x = 2gx$$ $$6y=2fy$$ $$c=-5$$ $$g=-2, f=3, c=-5$$
Radius of circle = $\sqrt{4+9+5}=\sqrt{18}$
Area of circle= $\pi *18$
But the answer is $\frac {23}{4} * \pi$
• Your argument would be correct if the coefficients of $x^2$ and $y^2$ were $1$. But they are not. – Leo163 Nov 20 '16 at 11:56
• Do they need to be equal? Correct me but shouldn't just their coefficient be equal? To satisfy $a=b$ where a and b are coefficients of x and y respectively – Akshat Batra Nov 20 '16 at 11:58
• @AkshatBatra Yes, the coefficients of the quadratic expressions for $\;x\,,\,\,y\;$ must be equal if we have a circle (otherwise it is an ellipse), but then you must divide through the whole equation by that common coefficient, and that affects the radius...!
|
– DonAntonio Nov 20 '16 at 11:59
Complete squares after putting $\;\lambda xy=0\implies \lambda =0\;$:
$$0=2x^2+2y^2-4x+6y-5=2(x-1)^2-2+2\left(y-\frac32\right)^2-\frac92-5\implies$$
$$\implies2(x-1)^2+2\left(y-\frac32\right)^2=\frac{23}2\implies(x-1)^2+\left(y-\frac32\right)^2=\frac{23}4$$
and we have a circle of radius $\;\sqrt{\frac{23}4}\;$ , so its area is
$$\pi\sqrt{\frac{23}4}^2=\frac{23\pi}4$$
|
http://math.stackexchange.com/questions/266381/prove-that-int-01-psix-sin2-n-pi-x-space-mathrmdx-frac-pi2
| 1,429,986,160,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2015-18/segments/1429246650671.76/warc/CC-MAIN-20150417045730-00310-ip-10-235-10-82.ec2.internal.warc.gz
| 186,533,742
| 18,749
|
# Prove that $\int_0^1 \psi{(x) \sin(2 n \pi x)} \space\mathrm{dx}=-\frac{\pi}{2}$
Prove that $$\int_0^1 \psi{(x) \sin(2 n \pi x)} \space\mathrm{dx}=-\frac{\pi}{2}, \space n\ge1$$ where $\psi(x)$ - digamma function
-
What is $\psi(x)$? – Qiaochu Yuan Dec 28 '12 at 9:44
The claim is clearly wrong for $n=0$. – Hagen von Eitzen Dec 28 '12 at 9:53
An interesting commentary for the Riemann-Lebesgue lemma! – GEdgar Dec 28 '12 at 14:13
By the log-differentiation of the Euler's reflection formula, we have
$$\psi_0(x) - \psi_0(1-x) = -\pi \cot (\pi x).$$
Thus we have
\begin{align*} \int_{0}^{1}\psi_0(x) \sin (2\pi n x) \, dx &= \frac{1}{2}\int_{0}^{1}\psi_0(x) \sin (2\pi n x) \, dx - \frac{1}{2}\int_{0}^{1}\psi_0(1-x) \sin (2\pi n x) \, dx \\ &= -\frac{\pi}{2} \int_{0}^{1} \frac{\sin (2\pi n x)}{\sin (\pi x)} \, \cos (\pi x) \, dx \\ &= -\frac{1}{2} \int_{0}^{\pi} \frac{\sin (2 n \theta)}{\sin \theta} \, \cos \theta \, d\theta \\ &= - \int_{0}^{\frac{\pi}{2}} \frac{\sin (2 n \theta)}{\sin \theta} \, \cos \theta \, d\theta. \end{align*}
Now the rest follows by my blog posting.
-
@Chris'ssister, thank you. :) – sos440 Dec 28 '12 at 10:04
@sos440 By the way, cool blog you have out there. – Sasha Dec 28 '12 at 14:36
@sos440: I like your use of the reflection formula. (+1) – user26872 Dec 29 '12 at 20:08
Here's another approach using the integral representation for $\psi$. We assume $n$ is an integer greater than or equal to one. Then $$\begin{eqnarray*} \int_0^1 dx\, \sin(2n\pi x) \psi(x) &=& \int_0^1 dx\, \sin(2n\pi x) \int_0^\infty dt\, \left( \frac{e^{-t}}{t} - \frac{e^{-x t}}{1-e^{-t}} \right) \\ &=& \int_0^\infty dt\, \left( \frac{e^{-t}}{t} \int_0^1 dx\, \sin(2n\pi x) - \frac{1}{1-e^{-t}} \int_0^1 dx\, \sin(2n\pi x)e^{-x t} \right). \end{eqnarray*}$$ But $\int_0^1 dx\, \sin(2n\pi x) = 0$ and $$\int_0^1 dx\, \sin(2n\pi x)e^{-x t} = \frac{2n\pi}{t^2+4n^2\pi^2}(1-e^{-t}).$$ (Details for the second integral can be given if necessary.) Therefore $$\begin{eqnarray*} \int_0^1 dx\, \sin(2n\pi x) \psi(x) &=& -\int_0^\infty dt\, \frac{2n\pi}{t^2+4n^2\pi^2} \\ &=& -\frac{\pi}{2}. \end{eqnarray*}$$
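A hedged numerical cross-check, not part of the original thread (base R's digamma and integrate; it assumes the quadrature copes with the integrable singularity at 0):
sapply(1:4, function(n)
  integrate(function(x) digamma(x) * sin(2*pi*n*x), 0, 1)$value)
# each value should come out close to -pi/2 = -1.5708, independently of n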
-
${(\text{precious})}^{\text{precious}}$ (+1) :-) Your way is very short and easy. Thanks! – Chris's sis Dec 29 '12 at 20:11
@Chris'ssister: Glad to help. I had not seen this interesting integral before. (+1) – user26872 Dec 29 '12 at 21:08
| 1,029
| 2,376
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.59375
| 4
|
CC-MAIN-2015-18
|
longest
|
en
| 0.541777
|
# Prove that $\int_0^1 \psi{(x) \sin(2 n \pi x)} \space\mathrm{dx}=-\frac{\pi}{2}$
Prove that $$\int_0^1 \psi{(x) \sin(2 n \pi x)} \space\mathrm{dx}=-\frac{\pi}{2}, \space n\ge1$$ where $\psi(x)$ - digamma function
-
What is $\psi(x)$? – Qiaochu Yuan Dec 28 '12 at 9:44
The claim is clearly wrong for $n=0$. – Hagen von Eitzen Dec 28 '12 at 9:53
An interesting commentary for the Riemann-Lebesgue lemma! – GEdgar Dec 28 '12 at 14:13
By the log-differentiation of the Euler's reflection formula, we have
$$\psi_0(x) - \psi_0(1-x) = -\pi \cot (\pi x).$$
Thus we have
\begin{align*} \int_{0}^{1}\psi_0(x) \sin (2\pi n x) \, dx &= \frac{1}{2}\int_{0}^{1}\psi_0(x) \sin (2\pi n x) \, dx - \frac{1}{2}\int_{0}^{1}\psi_0(1-x) \sin (2\pi n x) \, dx \\ &= -\frac{\pi}{2} \int_{0}^{1} \frac{\sin (2\pi n x)}{\sin (\pi x)} \, \cos (\pi x) \, dx \\ &= -\frac{1}{2} \int_{0}^{\pi} \frac{\sin (2 n \theta)}{\sin \theta} \, \cos \theta \, d\theta \\ &= - \int_{0}^{\frac{\pi}{2}} \frac{\sin (2 n \theta)}{\sin \theta} \, \cos \theta \, d\theta. \end{align*}
Now the rest follows by my blog posting. -
@Chris'ssister, thank you. :) – sos440 Dec 28 '12 at 10:04
@sos440 By the way, cool blog you have out there. – Sasha Dec 28 '12 at 14:36
@sos440: I like your use of the reflection formula. (+1) – user26872 Dec 29 '12 at 20:08
Here's another approach using the integral representation for $\psi$. We assume $n$ is an integer greater than or equal to one. Then $$\begin{eqnarray*} \int_0^1 dx\, \sin(2n\pi x) \psi(x) &=& \int_0^1 dx\, \sin(2n\pi x) \int_0^\infty dt\, \left( \frac{e^{-t}}{t} - \frac{e^{-x t}}{1-e^{-t}} \right) \\ &=& \int_0^\infty dt\, \left( \frac{e^{-t}}{t} \int_0^1 dx\, \sin(2n\pi x) - \frac{1}{1-e^{-t}} \int_0^1 dx\, \sin(2n\pi x)e^{-x t} \right). \end{eqnarray*}$$ But $\int_0^1 dx\, \sin(2n\pi x) = 0$ and $$\int_0^1 dx\, \sin(2n\pi x)e^{-x t} = \frac{2n\pi}{t^2+4n^2\pi^2}(1-e^{-t}).$$ (Details for the second integral can be given if necessary.)
|
Therefore $$\begin{eqnarray*} \int_0^1 dx\, \sin(2n\pi x) \psi(x) &=& -\int_0^\infty dt\, \frac{2n\pi}{t^2+4n^2\pi^2} \\ &=& -\frac{\pi}{2}.
|
http://math.stackexchange.com/questions/276501/algebraic-number-theory-integral-basis?answertab=oldest
| 1,462,077,613,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-18/segments/1461860114285.32/warc/CC-MAIN-20160428161514-00138-ip-10-239-7-51.ec2.internal.warc.gz
| 185,081,123
| 18,138
|
Algebraic Number Theory - Integral Basis
Let $K$ be a number field with $[K:Q] =n$. Let $O_k$ be its ring of algebraic integers.
I understand how there is an integral basis for $Q$, i.e. $\exists$ a $Q$-basis of $K$ consisting of elements of $O_k$. Let this integral basis be denoted by $\omega_1, \omega_2, \dots, \omega_n \in O_k$.
However, I do not understand how this leads to the fact that
$$\bigoplus_{i=1}^n Z\omega_i \subseteq O_k$$
Could someone elaborate please? Thank you.
-
Gerry Myerson's answer is correct, but if his "of course" isn't obvious to you, then you should look in a textbook for the theorem that the sum and the product of any two algebraic integers is again an algebraic integer. (You implicitly used this fact in referring to "its ring of algebraic integers", but you might not have been aware of it.) – Andreas Blass Jan 12 '13 at 19:43
Thanks Andreas, appreciate it. I was confused because the text that I looked at stated this fact as a consequence of integral basis. But from Gerry's answer, I now understand that it applies for any $\omega_1,\omega_2,\dots,\omega_n \in O_k$, and actually could be stated before the exposition of integral basis in the text. – Conan Wong Jan 12 '13 at 19:46
That sum is (isomorphic to) the set of all numbers $\sum a_i\omega_i$ where the $a_i$ are integers. But if the $\omega_i$ are in $O_k$ then of course any integer linear combination of them is also in $O_k$.
-
Gerry, thank you. But then this would be true for any $\omega_1,\omega_2,\dots,\omega_n \in O_k$? i.e. they do not have to be an integral basis of $K$ for the fact to be true. – Conan Wong Jan 12 '13 at 19:35
@YACP I was confused because a text that I looked at stated this fact as a consequence of integral basis. Please do not assume that people who post questions that are trivial for you have not thought about them or looked in books beforehand. – Conan Wong Jan 12 '13 at 19:41
@YACP, on this site, where people regularly ask for $2+2$, no question about an integral basis for the ring of integers in an algebraic number field is trivial. – Gerry Myerson Jan 12 '13 at 19:45
The text may be trying to make the following point:
• if $\omega_1,\ldots,\omega_r$ are any elements of $O_k$ then their $\mathbb Z$-span, which one might denote by $$\sum_i \mathbb Z \omega_i,$$ is contained in $O_k$ (since $O_k$ is closed under addition).
• However, unless that $\omega_i$ are linearly independent over $\mathbb Z$, their span in $O_k$ won't be isomorphic to the direct sum of the $\mathbb Z \omega_i$.
There is always a natural surjection $$\bigoplus_i \mathbb Z \omega_i \to \sum_i \mathbb Z \omega_i$$ (the source being the direct sum and the target being the span in $O_k$), but in general it has a kernel; indeed, the kernel is the collection of all linear dependence relations between the $\omega_i$.
Now if the $\omega_i$ are an integral basis, then they are linearly independent over $\mathbb Z$, and so the direct sum is embedded into $O_k$.
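An illustrative example, not from the original answer: in $K=\mathbb Q(i)$ with $O_k=\mathbb Z[i]$, take $\omega_1=1$, $\omega_2=i$, $\omega_3=1+i$. Their span is all of $\mathbb Z[i]$, but the natural map $\bigoplus_i \mathbb Z\omega_i \cong \mathbb Z^3 \to \mathbb Z[i]$ has kernel generated by the relation $\omega_1+\omega_2-\omega_3=0$, so the direct sum is not embedded; with the integral basis $\{1,i\}$ alone there is no such relation, and the map is an isomorphism onto $O_k$.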
-
Thank you Matt. This is very useful too. – Conan Wong Jan 13 '13 at 2:26
| 853
| 3,066
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.578125
| 4
|
CC-MAIN-2016-18
|
latest
|
en
| 0.93146
|
Algebraic Number Theory - Integral Basis
Let $K$ be a number field with $[K:Q] =n$. Let $O_k$ be its ring of algebraic integers. I understand how there is an integral basis for $Q$, i.e. $\exists$ a $Q$-basis of $K$ consisting of elements of $O_k$. Let this integral basis be denoted by $\omega_1, \omega_2, \dots, \omega_n \in O_k$.
|
However, I do not understand how this leads to the fact that
$$\bigoplus_{i=1}^n Z\omega_i \subseteq O_k$$
Could someone elaborate please?
|
https://math.stackexchange.com/questions/1639335/show-a-limsup-limitsn-to-inftya-n
| 1,563,325,589,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00070.warc.gz
| 481,639,412
| 36,091
|
# Show $A=\limsup_\limits{n\to\infty}a_n$.
Let $\{a_n\}$ be a sequence of real numbers bounded from above, $A\in \Bbb R$.
Given any $\epsilon>0$,
a)$\exists n_0 \in \Bbb N$ such that $a_n<A+\epsilon$ for all $n\ge n_0$.
b)$\exists k\ge n_0$ such that $a_k>A-\epsilon$.
If the sequence satisfies the above two properties, show that $A=\limsup_\limits{n\to\infty}a_n$.
I know the definition of the limit superior as:$\limsup a_n = \inf_{\forall m} \sup_{n \ge m} a_n$. Also, if ($a_n$) is a real sequence bounded from above. Let $S :=$ {$t \in \Bbb R:$ $t$ is the limit of a convergent subsequence of ($a_n$) }. Then $A = sup S$. I've proved the opposite direction (i.e. Given $A=\limsup_{n\to\infty}a_n$, then it has the following two properties), but stuck on trying to prove the two properties imply A. Could someone provide a precise proof of this please? Thanks.
• Check out the \limits I added to the title, it makes the expression look cooler. – YoTengoUnLCD Feb 4 '16 at 3:34
For any $\epsilon >0$ there exists $n_0 \in \mathbb{N}$ such that for all $n \geqslant n_0$ we have $a_n < A + \epsilon$ and $A- \epsilon < a_k$ for some $k \geqslant n_0$.
Hence,
$$A - \epsilon \leqslant \sup_{n \geqslant n_0}a_n \leqslant A + \epsilon.$$
For any $m > n_0$ we have $\sup_{n \geqslant m}a_n \leqslant \sup_{n \geqslant n_0}a_n$ since $[m,\infty) \subset [n_0,\infty).$ Also there exists $k > m$ such that $A-\epsilon < a_k$.
Whence, it follows that
$$A - \epsilon \leqslant \sup_{n \geqslant m}a_n \leqslant A + \epsilon.$$
By definition, $\limsup_{n \to \infty}a_n = \inf_{m} \sup_{n \geqslant m}a_n$ and, since $\sup_{n \geqslant m}a_n$ is decreasing and bounded below, $\limsup_{n \to \infty}a_n= \lim_{m \to \infty}\sup_{n \geqslant m}a_n.$
Therefore, we have for any $\epsilon > 0$
$$A - \epsilon \leqslant \limsup_{n \to \infty}a_n \leqslant A + \epsilon.$$
Now you can reach the conclusion.
• Can I say that hence $A-\epsilon \le \limsup a_n \le A+\epsilon$, and since $\epsilon$ is arbitrary, this can only happen if $\limsup a_n = A$. – user57891 Feb 4 '16 at 3:21
• That is the final step, but there are some intermediate steps. I can expand above if you don't see it. – RRL Feb 4 '16 at 3:24
• If you could prove it in details that would be great, thanks! – user57891 Feb 4 '16 at 3:27
| 840
| 2,315
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.75
| 4
|
CC-MAIN-2019-30
|
latest
|
en
| 0.641532
|
# Show $A=\limsup_\limits{n\to\infty}a_n$. Let $\{a_n\}$ be a sequence of real numbers bounded from above, $A\in \Bbb R$. Given any $\epsilon>0$,
a)$\exists n_0 \in \Bbb N$ such that $a_n<A+\epsilon$ for all $n\ge n_0$. b)$\exists k\ge n_0$ such that $a_k>A-\epsilon$. If the sequence satisfies the above two properties, show that $A=\limsup_\limits{n\to\infty}a_n$. I know the definition of the limit superior as:$\limsup a_n = \inf_{\forall m} \sup_{n \ge m} a_n$. Also, if ($a_n$) is a real sequence bounded from above. Let $S :=$ {$t \in \Bbb R:$ $t$ is the limit of a convergent subsequence of ($a_n$) }. Then $A = sup S$. I've proved the opposite direction (i.e. Given $A=\limsup_{n\to\infty}a_n$, then it has the following two properties), but stuck on trying to prove the two properties imply A. Could someone provide a precise proof of this please? Thanks. • Check out the \limits I added to the title, it makes the expression look cooler. – YoTengoUnLCD Feb 4 '16 at 3:34
For any $\epsilon >0$ there exists $n_0 \in \mathbb{N}$ such that for all $n \geqslant n_0$ we have $a_n < A + \epsilon$ and $A- \epsilon < a_k$ for some $k \geqslant n_0$. Hence,
$$A - \epsilon \leqslant \sup_{n \geqslant n_0}a_n \leqslant A + \epsilon.$$
For any $m > n_0$ we have $\sup_{n \geqslant m}a_n \leqslant \sup_{n \geqslant n_0}a_n$ since $[m,\infty) \subset [n_0,\infty).$ Also there exists $k > m$ such that $A-\epsilon < a_k$. Whence, it follows that
$$A - \epsilon \leqslant \sup_{n \geqslant m}a_n \leqslant A + \epsilon.$$
By definition, $\limsup_{n \to \infty}a_n = \inf_{m} \sup_{n \geqslant m}a_n$ and, since $\sup_{n \geqslant m}a_n$ is decreasing and bounded below, $\limsup_{n \to \infty}a_n= \lim_{m \to \infty}\sup_{n \geqslant m}a_n.$
Therefore, we have for any $\epsilon > 0$
$$A - \epsilon \leqslant \limsup_{n \to \infty}a_n \leqslant A + \epsilon.$$
Now you can reach the conclusion.
|
• Can I say that hence $A-\epsilon \le \limsup a_n \le A+\epsilon$, and since $\epsilon$ is arbitrary, this can only happen if $\limsup a_n = A$.
|
https://cs.stackexchange.com/questions/146113/is-it-np-hard-to-find-different-roots-of-different-matrices-simultaneously
| 1,643,185,243,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-05/segments/1642320304928.27/warc/CC-MAIN-20220126071320-20220126101320-00135.warc.gz
| 240,063,361
| 33,804
|
# Is it NP-hard to find different roots of different matrices simultaneously?
Consider the following problem:
• input: pairwise distinct natural numbers $$k_1,\dots,k_m$$ that are all $$\leq n$$, and matrices $$A_1,\dots,A_m \in \Bbb Q^{n \times n}$$ where $$m \leq n$$.
• output: a matrix $$B \in \Bbb Q^{n \times n}$$ such that $$B^{k_i}=A_{i}$$ for every $$i \leq m$$, if such a matrix exists. If no such matrix exists, the output is None.
Is the above problem NP-hard?
It's tempting to think that we can just compute the $$k_i$$th root of each $$A_i$$ somehow and check whether they all give the same answer, but that is too slow: with matrices, there can be exponentially many roots.
• Why are you interested in NP-hardness of this problem? What’s your original motivation? (This may answer all our clarification questions. Also, it might be the case that you didn’t take the best approach for the original problem (the lack of precision is a sign of that), and we might point you to a better one) Nov 28 '21 at 0:20
• @Dmitry I'm working with my colleagues on a paper where we experimentally demonstrate a new heuristic way to solve a certain problem efficiently. The most obvious approach to solve that problem is via solving the problem that I posted above, so it would be nice to show that the above problem is difficult.
– Haim
Nov 28 '21 at 0:33
• Idea: let $K=k_1 \times \cdots \times k_m$, then if such a $B$ exists, we must have $A_1^{K/k_1} = \cdots = A_m^{K/k_m}$. Unfortunately I don't think the converse holds, so I don't think this yields an efficient algorithm for the problem, alas.
– D.W.
Nov 28 '21 at 0:34
• This probably means that you are interested in its hardness, not NP-hardness in particular. E.g. it may be undecidable but not NP-hard. It’s not evident to me that the problem is decidable (may be evident to you though). Nov 28 '21 at 0:56
• @Dmitry What makes it undecidable? Assuming that you can compute all $k_1$th roots of $A_1$ (which I believe is the case), you can just compute the $k_i$th powers of each of them for each $i$ and see if any of them satisfies the desired requirements. Furthermore, given $B$ as required by the output, you can verify in polynomial time that $B$ is as required, and so this problem is in NP. Am I missing something here?
– Haim
Nov 28 '21 at 1:08
| 633
| 2,323
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.59375
| 4
|
CC-MAIN-2022-05
|
latest
|
en
| 0.90552
|
# Is it NP-hard to find different roots of different matrices simultaneously? Consider the following problem:
• input: pairwise distinct natural numbers $$k_1,\dots,k_m$$ that are all $$\leq n$$, and matrices $$A_1,\dots,A_m \in \Bbb Q^{n \times n}$$ where $$m \leq n$$. • output: a matrix $$B \in \Bbb Q^{n \times n}$$ such that $$B^{k_i}=A_{i}$$ for every $$i \leq m$$, if such matrix exists. If no such matrix exists, the output is None. Is the above problem NP-hard? It's tempting to think that we can just compute the $$k_i$$th root of each $$A_i$$ somehow and check whether they all give the same answer, but that is too slow: with matrices, there can be exponentially many roots. • Why are you interested in NP-hardness of this problem? What’s your original motivation? (This may answer all our clarification questions. Also, it might be the case that you didn’t take the best approach for the original problem (the lack of precision is a sign of that), and we might point you to a better one) Nov 28 '21 at 0:20
• @Dmitry I'm working with my colleagues on a paper where we experimentally demonstrate a new heuristic way to solve a certain problem efficiently. The most obvious approach to solve that problem is via solving the problem that I posted above, so it would be nice to show that the above problem is difficult.
|
– Haim
Nov 28 '21 at 0:33
• Idea: let $K=k_1 \times \cdots \times k_m$, then if such a $B$ exists, we must have $A_1^{K/k_1} = \cdots = A_m^{K/k_m}$.
|
https://math.stackexchange.com/questions/433864/under-what-conditions-is-aba-b-a2-b2-for-two-n-times-n-matrice
| 1,627,340,652,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-31/segments/1627046152156.49/warc/CC-MAIN-20210726215020-20210727005020-00712.warc.gz
| 397,941,932
| 38,208
|
Under what conditions is $(A+B)(A-B) = (A^2 - B^2)$ for two $n \times n$ matrices $A$ and $B$?
so my approach to this problem was
to view $A$ as a matrix of the form $\begin{bmatrix}a_1 & a_2 & a_3 & \dots & a_n\end{bmatrix}$ and $B$ as $\begin{bmatrix}b_1 & b_2 & b_3 & \dots & b_n\end{bmatrix}$
define variable $C = (A+B)$ and $D = (A-B)$
$C = \begin{bmatrix}c_1 & c_2 & c_3 & \dots & c_n\end{bmatrix}$ $D = \begin{bmatrix}d_1 & d_2 & d_3 & \dots & d_n\end{bmatrix}$
$CD = A^2 - B^2$
$Cd_1 + Cd_2 +\dots + Cd_n = (a_1A - b_1B) + (a_2A + b_2B) + (a_3B + b_3B) + \dots + (a_nB + b_nB)$
substitute back for $d$ and $C$
$(a_1-b_1)(A+B) + (a_2 - b_2)(A+B) + \dots = (a_1A - b_1B) + (a_2A + b_2B) + (a_3B + b_3B)+ \dots$
$(a_1A + A_1B -B_1A - b_1B) + (a_2A + A_2B -B_2A - b_2B) + \dots = (a_1A - b_1B) + (a_2A + b_2B) + \dots$
at this point I'm stuck; I feel like I'm on the wrong track because my answer does not match the solution. How does one solve this problem?
You can make it simpler. $$(A+B)(A-B)=A^2+BA-AB-B^2$$ When is this equal to $A^2-B^2$?
$(A+B)(A-B)=A^2+BA-AB-B^2$ is equal to $A^2-B^2$ when $AB=BA$, i.e. $A$ and $B$ must commute
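A small numerical illustration, not from the original page (plain R; the matrices are just one hypothetical non-commuting pair):
A <- matrix(c(0, 0, 1, 0), 2, 2)   # column-major fill, so A = [[0,1],[0,0]]
B <- matrix(c(0, 1, 0, 0), 2, 2)   # B = [[0,0],[1,0]]
(A + B) %*% (A - B)                # gives diag(-1, 1)
A %*% A - B %*% B                  # gives the zero matrix, so the identity fails when AB != BA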
| 546
| 1,170
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2021-31
|
latest
|
en
| 0.65039
|
Under what conditions is $(A+B)(A-B) = (A^2 - B^2)$ for two $n \times n$ matrices $A$ and $B$? so my approach to this problem was
to view $A$ as a matrix of the form $\begin{bmatrix}a_1 & a_2 & a_3 & \dots & a_n\end{bmatrix}$ and $B$ as $\begin{bmatrix}b_1 & b_2 & b_3 & \dots & b_n\end{bmatrix}$
define variable $C = (A+B)$ and $D = (A-B)$
$C = \begin{bmatrix}c_1 & c_2 & c_3 & \dots & c_n\end{bmatrix}$ $D = \begin{bmatrix}d_1 & d_2 & d_3 & \dots & d_n\end{bmatrix}$
$CD = A^2 - B^2$
$Cd_1 + Cd_2 +\dots + Cd_n = (a_1A - b_1B) + (a_2A + b_2B) + (a_3B + b_3B) + \dots + (a_nB + b_nB)$
substitute back for $d$ and $C$
$(a_1-b_1)(A+B) + (a_2 - b_2)(A+B) + \dots = (a_1A - b_1B) + (a_2A + b_2B) + (a_3B + b_3B)+ \dots$
$(a_1A + A_1B -B_1A - b_1B) + (a_2A + A_2B -B_2A - b_2B) + \dots = (a_1A - b_1B) + (a_2A + b_2B) + \dots$
at this point I'm stuck, because I feel like I'm on the wrong track because my answer does not match correctly with the solution. How does one solve this problem? You can make it simpler. $$(A+B)(A-B)=A^2+BA-AB-B^2$$ When is this equal to $A^2-B^2$?
|
$(A+B)(A-B)=A^2+BA-AB-B^2$ is equal to $A^2-B^2$ when $AB=BA$, i.e. $A$ and $B$ must commute
|
https://stats.stackexchange.com/questions/485532/how-i-can-find-the-px-1-leq-k-frac-sum-i-1i-n-x-i-n-t-when
| 1,716,106,671,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-22/segments/1715971057774.18/warc/CC-MAIN-20240519070539-20240519100539-00098.warc.gz
| 493,359,860
| 39,394
|
# How can I find $P(X_1 \leq k \mid \frac{\sum_{i=1}^{i=n} X_i }{n} = t)$ when $\{X_i\}_{i=1...n}$ follow a continuous distribution?
More precisely, I have the following problem:
given a sample of r.v. $$\{X_i\}_{i=1...n}$$ i.i.d. distributed with $$f_{\theta}(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-\theta)^2}$$ and the statistic $$T(X_1,...,X_n) = \frac{\sum_{i=1}^{i=n} X_i }{n}$$ for $$\theta$$, let's consider $$g(\theta) = P(X_1 \leq y) = \int_{- \infty}^{y}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-\theta)^2}dx$$ and the function $$u(t) = E[\mathbb{1}_{X_1 \leq y}(X_1,...,X_n)|T = t],$$ where $$\mathbb{1}_{X_1 \leq y}(X_1,...,X_n)$$ is the indicator function of the event $$\{X_1 \leq y\}$$.
Question : Evaluate $$u(t)$$ and verify that $$u(t)$$ doesn't depend on $$\theta$$, and evaluate the $$Var_{\theta}(u(T(X_1,...,X_n)))$$.
Obviously $$u(t) = E[\mathbb{1}_{X_1 \leq y}(X_1,...,X_n)|T = t] = P(X_1 \leq y |\sum_{i=1}^{i=n} X_i = nt)$$, but how can I evaluate it? I tried to apply the transformation in Find the joint distribution of $X_1$ and $\sum_{i=1}^n X_i$, which is a similar problem but with a different distribution function, but it gave me nothing: if I calculate the integral in order to find the conditional probability, the dependency on $\theta$ remains and the integral is not computable. Probably the approach is different.
I have no idea how to approach it!
Generally, how can I evaluate $$u(t)$$ when the distributions are continuous? Is there a common strategy to evaluate this integral?
• Here is a hint: $P(X_1\le y|\sum_{i=1}^n X_i=nt)=P(X_1\le y|X_1 = nt - \sum_{i=2}^n X_i)$. Then, what is the result if $nt - \sum_{i=2}^n X_i$ is equal, say, to $y+42$ ?
– TMat
Sep 1, 2020 at 11:30
• Does this answer your question? UMVUE for probability of cutoff Sep 1, 2020 at 11:46
• @Xi'an in this case, how can I do that? The approach in the link that I mentioned doesn't work; there are too many integrals that are not computable. And why, if the $\{X_i\}_i$ are independent, is their sum also independent of $X_1$?
– Tazz
Sep 1, 2020 at 12:49
• You can immediately reduce the problem to one with two variables, $X_1$ and $X_2+\cdots+X_n,$ which have a joint Normal distribution.
– whuber
Sep 1, 2020 at 13:41
• I am afraid that, if this Normality step is beyond your reach, the homework question may prove too advanced for your probability skills. Sep 1, 2020 at 16:31
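A hedged sketch of the computation whuber points to, not from the original thread (it assumes unit variance, as in the stated density): $(X_1,\bar X)$ is jointly normal with $\operatorname{Var}(X_1)=1$, $\operatorname{Var}(\bar X)=1/n$ and $\operatorname{Cov}(X_1,\bar X)=1/n$, so $X_1\mid \bar X=t \sim N\big(t,\ 1-\tfrac1n\big)$. Hence $u(t)=\Phi\big((y-t)/\sqrt{1-1/n}\big)$, which does not involve $\theta$.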
| 858
| 2,422
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.703125
| 4
|
CC-MAIN-2024-22
|
latest
|
en
| 0.829503
|
# How i can find the $P(X_1 \leq k | \frac{\sum_{i=1}^{i=n} X_i }{n} = t)$ when $\{X_i\}_{i=1...n}$ follow a continuous distribution? More precisely i have the following problem:
given a sample of r.v. $$\{X_i\}_{i=1...n}$$ i.i.d. distributed with $$f_{\theta}(x) = \frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-\theta)^2}$$ and the statistic $$T(X_1,...,X_n) = \frac{\sum_{i=1}^{i=n} X_i }{n}$$ for $$\theta$$, let's consider $$g(\theta) = P(X_1 \leq y) = \int_{- \infty}^{y}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(x-\theta)^2}dx$$ and the function $$u(t) = E[\mathbb{1}_{X_1 \leq y}(X_1,...,X_n)|T = t],$$ where $$\mathbb{1}_{X_1 \leq y}(X_1,...,X_n)$$ is the indicator function of the event $$\{X_1 \leq y\}$$. Question : Evaluate $$u(t)$$ and verify that $$u(t)$$ doesn't depend on $$\theta$$, and evaluate the $$Var_{\theta}(u(T(X_1,...,X_n)))$$. Obviously $$u(t) = E[\mathbb{1}_{X_1 \leq y}(X_1,...,X_n)|T = t] = P(X_1 \leq y |\sum_{i=1}^{i=n} X_i = nt)$$, but how i can evaluate it? I tried to apply the trasformation in Find the joint distribution of $X_1$ and $\sum_{i=1}^n X_i$, that is a similar problem but with a different distribution function, but give me nothing because if i calculate the integral, in order to find the conditional probability, remain the dependency on $$\theta$$ and the integral is not computable. Probably the approach is different. I have no idea how to approach it! Generally, how i can evaluate $$u(t)$$ when there are continuos distributions? There is a common startegy to evaluate this integral? • Here is a hint: $P(X_1\le y|\sum_{i=1^n} X_i=nt)=P(X_1\le y|X_1 = nt - \sum_{i=2}^n X_i)$.
|
Then, what is the result if $nt - \sum_{i=2}^n X_i$ is equal, say, to $y+42$ ?
|
https://math.stackexchange.com/questions/2947480/finding-area-of-largest-rectangle-between-the-axes-and-a-line/2947496
| 1,709,362,721,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00458.warc.gz
| 386,424,157
| 35,561
|
# Finding area of largest rectangle between the axes and a line
The question is as follows: Find the area of the largest rectangle that has sides parallel to the coordinate axes, one corner at the origin and the opposite corner on the line 3x+2y=12 in the first quadrant.
I get that the equation I have to maximize is in the form of A=bh but I don't know how to eliminate one of the variables to continue.
• Hint: If the opposite corner has coordinates $(h,k)$, then $3h+2k=12$ and the area is given by $A=hk$. Oct 8, 2018 at 19:10
Since the bottom left corner of the rectangle is at the origin, then the $$(x,\,y)$$ coordinates of the top right corner will be the base and height (draw a figure to help visualize). We know that this point is on the line $$3x+2y=12$$, so that $$y=-\frac{3}{2}x+6$$. Plugging this in gives $$A=x(-\frac{3}{2}x+6)$$, which has only one variable.
Suppose that your rectangle has vertices $$(0, 0)$$, $$(x, 0)$$, $$(0, y)$$, and $$(x, y)$$, where $$x > 0$$, $$y > 0$$, and $$3x + 2y = 12. \tag{1}$$ Then the area of your rectangle is given by $$A = xy. \tag{2}$$ But (1) implies that $$y = 6 - \frac{3x}{2}. \tag{3}$$ Putting the value of $$y$$ from (3) into the formula in (2), we obtain $$A = A(x) = x \left( 6 - \frac{3x}{2} \right) = 6x - \frac{3x^2}{2}. \tag{4}$$ Now (4) gives area $$A$$ as a function of $$x$$ for $$x > 0$$.
Differentiating both sides of (4) w.r.t. $$x$$ we obtain $$A^\prime(x) = 6 - 3x.$$ Thus we see that $$A^\prime(x) \ \begin{cases} > 0 \ & \ \mbox{ for } x < 2, \\ = 0 \ & \ \mbox{ for } x = 2, \\ < 0 \ & \ \mbox{ for } x > 2. \end{cases}$$ Thus the area attains its (relative) maximum value at $$x = 2$$, and since this is the only relative extreme value of $$A$$, this is in fact the absolute maximum value of $$A$$.
Therefore the largest possible area is given by $$A(2) = 12 - 6 = 6.$$
Any point on the line can be represented as $$(x,(12-3x)/2)$$
Now we have to find the maximum area of a rectangle given one corner at $$(0,0)$$ and the opposite corner at $$(x,(12-3x)/2)$$.
Area $$=x\cdot(12-3x)/2$$
We have to maximize this area: taking the derivative and setting it equal to $$0$$, we get $$x=2$$.
Substituting this value back, we get an area of $$6$$, which is the maximum.
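As a hedged one-line check, not from the original answers (base R's optimize):
optimize(function(x) x * (12 - 3*x) / 2, interval = c(0, 4), maximum = TRUE)
# $maximum is about 2 and $objective about 6, matching both answers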
| 744
| 2,224
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.75
| 5
|
CC-MAIN-2024-10
|
latest
|
en
| 0.866014
|
# Finding area of largest rectangle between the axes and a line
The question is as follows: Find the area of the largest rectangle that has sides parallel to the coordinate axes, one corner at the origin and the opposite corner on the line 3x+2y=12 in the first quadrant. I get that the equation I have to maximize is in the form of A=bh but I don't know how to eliminate one of the variables to continue. • Hint: If the opposite corner has coordinates $(h,k)$, then $3h+2k=12$ and the area is given by $A=hk$. Oct 8, 2018 at 19:10
Since the bottom left corner of the rectangle is at the origin, then the $$(x,\,y)$$ coordinates of the top right corner will be the base and height (draw a figure to help visualize). We know that this point is on the line $$3x+2y=12$$, so that $$y=-\frac{3}{2}x+6$$. Plugging this in gives $$A=x(-\frac{3}{2}x+6)$$, which has only one variable. Suppose that your rectangle has vertices $$(0, 0)$$, $$(x, 0)$$, $$(0, y)$$, and $$(x, y)$$, where $$x > 0$$, $$y > 0$$, and $$3x + 2y = 12. \tag{1}$$ Then the area of your rectangle is given by $$A = xy. \tag{2}$$ But (1) implies that $$y = 6 - \frac{3x}{2}. \tag{3}$$ Putting the value of $$y$$ from (3) into the formula in (2), we obtain $$A = A(x) = x \left( 6 - \frac{3x}{2} \right) = 6x - \frac{3x^2}{2}. \tag{4}$$ Now (4) gives area $$A$$ as a function of $$x$$ for $$x > 0$$. Differentiating both sides of (4) w.r.t. $$x$$ we obtain $$A^\prime(x) = 6 - 3x.$$ Thus we see that $$A^\prime(x) \ \begin{cases} > 0 \ & \ \mbox{ for } x < 2, \\ = 0 \ & \ \mbox{ for } x = 2, \\ < 0 \ & \ \mbox{ for } x > 2. \end{cases}$$ Thus the area attains its (relative) maximum value at $$x = 2$$, and since this is the only relative extreme value of $$A$$, this is in fact the absolute maximum value of $$A$$.
|
Therefore the largest possible area is given by $$A(2) = 12 - 6 = 6.$$
Any point on the line can be represented as $$(x,(12-3x)/2)$$
Now we have to find the maximum area of a rectangle given one corner at $$(0,0)$$ and the opposite corner at $$(x,(12-3x)/2)$$.
Area $$=x\cdot(12-3x)/2$$
We have to maximize this area: taking the derivative and setting it equal to $$0$$, we get $$x=2$$.
Substituting this value back, we get an area of $$6$$, which is the maximum.
|
https://math.stackexchange.com/questions/2293877/diagonalization-proof-do-eigenvectors-of-an-eigenvalue-always-span-the-corresp
| 1,585,757,002,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-16/segments/1585370505731.37/warc/CC-MAIN-20200401130837-20200401160837-00319.warc.gz
| 504,260,946
| 32,597
|
# Diagonalization proof - Do eigenvectors of an eigenvalue always span the corresponding eigenspace?
I am reading a proof on the diagonalization theorem (Linear Algebra: A Modern Introduction) and I'm struggling with understanding the first part. The theorem is as follows:
The Diagonalization Theorem
Let A be an n x n matrix whose distinct eigenvalues are $\lambda_{1}, \lambda_{2},...,\lambda_{k}$. The following statements are equivalent:
a. A is diagonalizable
b. The union $\beta$ of the bases of the eigenspaces of A (as in Theorem 4.24) contains n vectors.
c. The algebraic multiplicity of each eigenvalue equals its geometric multiplicity.
The proof of the first part is as follows:
Proof (a)=>(b) If A is diagonalizable, then it has n linearly independent eigenvectors, by Theorem 4.23. If $n_{i}$ of these eigenvectors correspond to the eigenvalue $\lambda_{i}$, then $\beta_{i}$ contains at least $n_{i}$ vectors. (We already know that these $n_{i}$ vectors are linearly independent; the only thing that might prevent them from being a basis for $E_{\lambda_{i}}$ is that they might not span it.) Thus $\beta$ contains at least n vectors. But, by theorem 4.24, $\beta$ is a linearly independent set in $\Re^{n}$; hence it contains exactly n vectors.
Why might the $n_{i}$ vectors not span $E_{\lambda_{i}}$? Shouldn't the eigenvectors of an eigenvalue always span the corresponding eigenspace, since an eigenspace is defined as the collection of all eigenvectors corresponding to $\lambda$ together with the zero vector?
## 1 Answer
That the $n_i$ vectors span the eigenspace $E_{\lambda_i}$ does require a proof: After all, they aren't all the eigenvectors corresponding to $\lambda_i$, only a small subset of these eigenvectors.
We might throw a bit of light on the question by generalizing the setting a bit: Assume $A$ is not diagonalizable, and let $\beta$ be maximal set of eigenvectors of $A$. If $\beta_i$ are those vectors in $\beta$ corresponding to the eigenvalue $\lambda_i$, it is still true that $\beta_i$ spans $E_{\lambda_i}$. (Otherwise consider a vector in $E_{\lambda_i}$ not in the span of $\beta_i$, and show that this could be added to $\beta$, contradicting the maximality of $\beta$.)
What goes wrong with the proof, if $A$ is not diagonalizable, is that $\beta$ will have fewer than $n$ members. But that is a different issue.
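An illustrative example, not from the original answer: for $A=\begin{bmatrix}1&1\\0&1\end{bmatrix}$ the only eigenvalue is $1$ with algebraic multiplicity $2$, but $E_{1}$ is spanned by $(1,0)^T$ alone, so any maximal set $\beta$ of linearly independent eigenvectors has just one member, fewer than $n=2$, and $A$ is not diagonalizable.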
• So the book says that the eigenvectors of $\lambda_{i}$ might not span $E_{\lambda_{i}}$ simply because they haven't proved it yet. – Ruben23630 May 23 '17 at 20:39
• That does indeed seem to be the case. – Harald Hanche-Olsen May 23 '17 at 20:50
| 688
| 2,622
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4
| 4
|
CC-MAIN-2020-16
|
latest
|
en
| 0.870793
|
# Diagonalization proof - Do eigenvectors of an eigenvalue always span the corresponding eigenspace? I am reading a proof on the diagonalization theorem (Linear Algebra: A Modern Introduction) and I'm struggling with understanding the first part. The theorem is as follows:
The Diagonalization Theorem
Let A be an n x n matrix whose distinct eigenvalues are $\lambda_{1}, \lambda_{2},...,\lambda_{k}$. The following statements are equivalent:
a. A is diagonalizable
b. The union $\beta$ of the bases of the eigenspaces of A (as in Theorem 4.24) contains n vectors. c. The algebraic multiplicity of each eigenvalue equals its geometric multiplicity. The proof of the first part is as follows:
Proof (a)=>(b) If A is diagonalizable, then it has n linearly independent eigenvectors, by Theorem 4.23. If $n_{i}$ of these eigenvectors correspond to the eigenvalue $\lambda_{i}$, then $\beta_{i}$ contains at least $n_{i}$ vectors. (We already know that these $n_{i}$ vectors are linearly independent; the only thing that might prevent them from being a basis for $E_{\lambda_{i}}$ is that they might not span it.) Thus $\beta$ contains at least n vectors.
|
But, by theorem 4.24, $\beta$ is a linearly independent set in $\Re^{n}$; hence it contains exactly n vectors.
|
https://math.stackexchange.com/questions/1951595/lighthouse-problem-my-answer-does-not-match-the-key
| 1,603,731,225,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-45/segments/1603107891428.74/warc/CC-MAIN-20201026145305-20201026175305-00433.warc.gz
| 412,430,089
| 32,906
|
# Lighthouse problem: my answer does not match the key
Here is the problem:
A lighthouse is located in Lake Michigan, 300 feet from the nearest point on shore. The light rotates at a constant rate, making k complete revolutions per hour. At the moment that the beam hits a point on the shore 500 feet from the lighthouse, the point of light is traveling along the shoreline at a rate of 2,500 feet per minute. Find k.
x
A___________B
| /
| /
300| /500
| /
|θ /
|/
L
(Lighthouse)
Here is my solution:
$$x = 500sin(\theta)$$ $$\frac{dx}{dt} = 500cos(\theta)\frac{d\theta}{dt}$$ When the light hits B, $\frac{dx}{dt} = 2500$ and $cos(\theta) = \frac35$.
Thus, $$2500 = 500\cdot\frac35\frac{d\theta}{dt}$$ $$\frac{d\theta}{dt} = \frac{25}3$$
But, the light makes k revolutions per hour, which we will convert to per minute. $$\frac{d\theta}{dt} = \frac{2\pi k}{60}$$
Now, we have $$k = \frac{250}{\pi}$$
But when I looked at the key, it was $\frac{90}\pi$. Where was I wrong?
• Hint: $x = 500 \sin(\theta)$ is correct at the given $\theta$, but $500$ is not a constant as $\theta$ varies, so you can not derive it against time as such. – dxiv Oct 3 '16 at 3:57
Your expression for $x$ is wrong for general $\theta$, since the 500 is not a constant. It should be $$x=300\tan\theta$$ Then $$\frac{dx}{dt}=300\sec^2\theta\cdot\frac{d\theta}{dt}=2500$$ Since $\cos\theta=\frac35$, $\sec^2\theta=\frac{25}9$. $$300\cdot\frac{25}9\cdot\frac{d\theta}{dt}=2500$$ $$\frac{d\theta}{dt}=3=\frac{2\pi k}{60}$$ $$k=\frac{90}{\pi}$$
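A quick numeric check of this answer, not from the original page (plain R):
dtheta_dt <- 2500 / (300 * (5/3)^2)   # radians per minute, since sec(theta) = 5/3
k <- dtheta_dt * 60 / (2 * pi)        # revolutions per hour
c(dtheta_dt, k, 90 / pi)              # 3, 28.65, 28.65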
| 525
| 1,560
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.34375
| 4
|
CC-MAIN-2020-45
|
latest
|
en
| 0.796395
|
# Lighthouse problem: my answer does not match the key
Here is the problem:
A lighthouse is located in Lake Michigan, 300 feet from the nearest point on shore. The light rotates at a constant rate, making k complete revolutions per hour. At the moment that the beam hits a point on the shore 500 feet from the lighthouse, the point of light is traveling along the shoreline at a rate of 2,500 feet per minute. Find k.
x
A___________B
| /
| /
300| /500
| /
|θ /
|/
L
(Lighthouse)
Here is my solution:
$$x = 500sin(\theta)$$ $$\frac{dx}{dt} = 500cos(\theta)\frac{d\theta}{dt}$$ When the light hits B, $\frac{dx}{dt} = 2500$ and $cos(\theta) = \frac35$. Thus, $$2500 = 500\cdot\frac35\frac{d\theta}{dt}$$ $$\frac{d\theta}{dt} = \frac{25}3$$
But, the light makes k revolutions per hour, which we will convert to per minute. $$\frac{d\theta}{dt} = \frac{2\pi k}{60}$$
Now, we have $$k = \frac{250}{\pi}$$
But when I looked at the key, it was $\frac{90}\pi$. Where was I wrong? • Hint: $x = 500 \sin(\theta)$ is correct at the given $\theta$, but $500$ is not a constant as $\theta$ varies, so you can not derive it against time as such. – dxiv Oct 3 '16 at 3:57
Your expression for $x$ is wrong for general $\theta$, since the 500 is not a constant. It should be $$x=300\tan\theta$$ Then $$\frac{dx}{dt}=300\sec^2\theta\cdot\frac{d\theta}{dt}=2500$$ Since $\cos\theta=\frac35$, $\sec^2\theta=\frac{25}9$.
|
$$300\cdot\frac{25}9\cdot\frac{d\theta}{dt}=2500$$ $$\frac{d\theta}{dt}=3=\frac{2\pi k}{60}$$ $$k=\frac{90}{\pi}$$
|
http://math.stackexchange.com/questions/217621/big-omega-proof/217623
| 1,469,688,704,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257828009.82/warc/CC-MAIN-20160723071028-00068-ip-10-185-27-174.ec2.internal.warc.gz
| 175,218,113
| 16,425
|
Big Omega Proof
If $f_1(x)$ and $f_2(x)$ are functions from the set of positive integers to the set of positive real numbers and $f_1(x)$ and $f_2(x)$ are both $\Omega(g(x))$, is $(f_1 − f_2)(x)$ also $\Omega(g(x))$?
How do I prove/disprove this?
-
No. It is not. For instance, consider $f_1(x) = f_2(x) = x^2$ and $g(x) = x$.
For a slightly non-trivial example, consider $$f_1(x) = x^3 + x^2 + x + 1, f_2(x) = x^3 + x^2 + 1 \text{ and }g(x) = x^2$$ We have $f_1(x),f_2(x) \in \Omega(g(x))$ but $f_1(x) -f_2(x) = x \notin \Omega(g(x))$.
| 223
| 540
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.671875
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.711243
|
Big Omega Proof
If $f_1(x)$ and $f_2(x)$ are functions from the set of positive integers to the set of positive real numbers and $f_1(x)$ and $f_2(x)$ are both $\Omega(g(x))$, is $(f_1 − f_2)(x)$ also $\Omega(g(x))$? How do I prove/disprove this? -
No. It is not. For instance, consider $f_1(x) = f_2(x) = x^2$ and $g(x) = x$.
|
For a slightly non-trivial example, consider $$f_1(x) = x^3 + x^2 + x + 1, f_2(x) = x^3 + x^2 + 1 \text{ and }g(x) = x^2$$ We have $f_1(x),f_2(x) \in \Omega(g(x))$ but $f_1(x) -f_2(x) = x \notin \Omega(g(x))$.
|
http://math.stackexchange.com/questions/70106/search-the-or-of-negation-between-boolean-algebra/70114
| 1,469,731,175,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257828313.74/warc/CC-MAIN-20160723071028-00286-ip-10-185-27-174.ec2.internal.warc.gz
| 156,459,186
| 17,544
|
# Search the OR of negation between boolean algebra
I have this formula $$(a\cdot b)+(\neg a\cdot \neg b)$$ At first I thought this was a case of $a+\neg a = 1$, so the answer would be 1, but then I realized $(\neg a\cdot \neg b) \neq \neg (a\cdot b)$.
I try to do De Morgan for each $(a\cdot b)$ and $(\neg a\cdot \neg b)$ so it will be $$(\neg a+\neg b) + (a + b)$$
am I doing it wrong?
(I'm sorry for my bad English)
Best Regards
-
## 2 Answers
The last part is incorrect, certainly. If you use De Morgan's Law on $a\cdot b$, you will get $\neg(\neg a + \neg b)$; and if you use De Morgan's law on $\neg a\cdot \neg b$, you will get $\neg(a+b)$, rather than $(\neg a+\neg b)$ and $(a+b)$.
So you would write that what you have is equivalent to $$\neg(\neg a+\neg b) + \neg(a+ b)$$ If you try using De Morgan's Law again, you will get $$\neg\bigl( (\neg a + \neg b)\cdot (a+b)\bigr).$$
In fact, I don't think you can simplify what you have. What you have is an "if and only if": it is true if both $a$ and $b$ are true, or if both $a$ and $b$ are false. It is neither a tautology, nor a contradiction.
-
this result is equal to $\bar a\bar b +ab$ disjunctive form...maybe that is a little bit simpler – pedja Oct 5 '11 at 17:18
thanks a lot, the clarification of unable to do of simplification is what I expected for :) – giripp Oct 5 '11 at 17:27
@pedja: That's what he started with: $(a\cdot b) + (\neg a\cdot \neg b)$. – Arturo Magidin Oct 5 '11 at 18:11
@Arturo,I was focused on your answer so I overlooked that fact :) – pedja Oct 5 '11 at 18:20
If you make a Karnaugh map (see picture below) you will see that your expression is a minimal disjunctive form and can't be simplified any further.
-
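A brute-force check, not from the original page (plain R): the expression is the XNOR of a and b, so it is true exactly when a and b agree, and hence cannot collapse to a constant or a single literal.
vals <- expand.grid(a = c(FALSE, TRUE), b = c(FALSE, TRUE))
cbind(vals, xnor = with(vals, (a & b) | (!a & !b)))   # TRUE exactly when a == b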
| 552
| 1,697
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.0625
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.833833
|
# Search the OR of negation between boolean algebra
I have this formula $$(a\cdot b)+(\neg a\cdot \neg b)$$ At first I thought this was a case of $a+\neg a = 1$, so the answer would be 1, but then I realized $(\neg a\cdot \neg b) \neq \neg (a\cdot b)$. I try to do De Morgan for each $(a\cdot b)$ and $(\neg a\cdot \neg b)$ so it will be $$(\neg a+\neg b) + (a + b)$$
am I doing it wrong? (I'm sorry for my bad English)
Best Regards
-
## 2 Answers
The last part is incorrect, certainly. If you use De Morgan's Law on $a\cdot b$, you will get $\neg(\neg a + \neg b)$; and if you use De Morgan's law on $\neg a\cdot \neg b$, you will get $\neg(a+b)$, rather than $(\neg a+\neg b)$ and $(a+b)$. So you would write that what you have is equivalent to $$\neg(\neg a+\neg b) + \neg(a+ b)$$ If you try using De Morgan's Law again, you will get $$\neg\bigl( (\neg a + \neg b)\cdot (a+b)\bigr).$$
In fact, I don't think you can simplify what you have. What you have is an "if and only if": it is true if both $a$ and $b$ are true, or if both $a$ and $b$ are false. It is neither a tautology, nor a contradiction.
|
-
this result is equal to $\bar a\bar b +ab$ disjunctive form...maybe that is a little bit simpler – pedja Oct 5 '11 at 17:18
thanks a lot, the clarification of unable to do of simplification is what I expected for :) – giripp Oct 5 '11 at 17:27
@pedja: That's what he started with: $(a\cdot b) + (\neg a\cdot \neg b)$.
|
https://gamedev.stackexchange.com/questions/201982/how-to-rotate-parent-object-to-align-child-rotation-with-a-separate-game-object
| 1,718,602,264,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198861696.51/warc/CC-MAIN-20240617024959-20240617054959-00326.warc.gz
| 249,505,451
| 37,270
|
# How to rotate parent object to align child rotation with a separate game object?
I have attached two images with the desired start and end positions.
The Brown Circle is the target rotation. The Orange Square is the parent (the black dot is the pivot point) and the Blue Rounded Square is the child.
How would I calculate the rotation of the parent to align the child to the target rotation?
Assuming you have a bearing angle for each object relative to its parent, this is just subtraction.
We want:
parent_angle + child_angle = target_angle
parent_angle = target_angle - child_angle
So in your example, the child is rotated about 45 degrees clockwise from its parent - call that -45 - and the target is unrotated at 0 degrees, so that gives:
parent_angle = 0 - (-45)
= +45
So the parent needs to be rotated 45 degrees counter-clockwise to compensate for the child's rotation and match it to the target.
This expression can wrap around to values outside the -180 to 180 or 0 to 360 or -pi to pi etc. range you might be using, but you can wrap the result with an angle difference function, if that matters for your use case. Prior Q&A covers how to write such a function, if your math library does not offer one built-in.
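A minimal sketch of the subtraction plus that wrap-around in R (hypothetical helper, assuming angles in degrees):
angle_diff <- function(target, child) ((target - child + 180) %% 360) - 180
angle_diff(0, -45)   # 45: rotate the parent 45 degrees counter-clockwise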
• Thank you! That is exactly what I was looking for. I initially thought the distance from the parent to the child would affect the final rotation but you cleared that up. Commented Aug 3, 2022 at 8:04
• Be sure to click the checkmark on one of these answers to mark it as "accepted". No hard feelings if you choose the other one — they did beat me to it. 😉 Commented Aug 3, 2022 at 9:34
I think this might be one of those things that seems trickier than it really is. Since rotations are rigid transformations of space, all vectors are affected equally no matter where in the plane they are. What that means is that we can completely ignore the fact that the pivot point is the parent position, and just think about how much the child needs to rotate.
And so, all you need to do is find the angle between the current orientation and the desired one. For example, the angle between the child's horizontal vector and the target's horizontal vector. You then rotate the parent by that amount, and done!
An important note is that you need to express both vectors in the same coordinate system. So, if the child is expressed in the local space of its parent, you might need to use a transformation matrix or something. The exact steps will of course depend on how all this information is made available in whatever tool you are using.
• Thank you, this is what I was aiming for. I didn't know the formula to achieve it. Commented Aug 3, 2022 at 5:04
| 602
| 2,687
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.734375
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.943089
|
# How to rotate parent object to align child rotation with a separate game object? I have attached two images with the desired start and end positions. The Brown Circle is the target rotation. The Orange Square is the parent (the black dot is the pivot point) and the Blue Rounded Square is the child. How would I calculate the rotation of the parent to align the child to the target rotation? Assuming you have a bearing angle for each object relative to its parent, this is just subtraction. We want:
parent_angle + child_angle = target_angle
parent_angle = target_angle - child_angle
So in your example, the child is rotated about 45 degrees clockwise from its parent - call that -45 - and the target is unrotated at 0 degrees, so that gives:
parent_angle = 0 - (-45)
= +45
So the parent needs to be rotated 45 degrees counter-clockwise to compensate for the child's rotation and match it to the target. This expression can wrap around to values outside the -180 to 180 or 0 to 360 or -pi to pi etc. range you might be using, but you can wrap the result with an angle difference function, if that matters for your use case. Prior Q&A covers how to write such a function, if your math library does not offer one built-in. • Thank you! That is exactly what I was looking for. I initially thought the distance from the parent to the child would affect the final rotation but you cleared that up. Commented Aug 3, 2022 at 8:04
• Be sure to click the checkmark on one of these answers to mark it as "accepted". No hard feelings if you choose the other one — they did beat me to it. 😉 Commented Aug 3, 2022 at 9:34
I think this might be one of those things that seems trickier than it really is. Since rotations are rigid transformations of space, all vectors are affected equally no matter where in the plane they are. What that means is that we can completely ignore the fact that the pivot point is the parent position, and just think about how much the child needs to rotate. And so, all you need to do is find the angle between the current orientation and the desired one. For example, the angle between the child's horizontal vector and the target's horizontal vector. You then rotate the parent by that amount, and done! An important note is that you need to express both vectors in the same coordinate system. So, if the child is expressed in the local space of its parent, you might need to use a transformation matrix or something. The exact steps will of course depend on how all this information is made available in whatever tool you are using. • Thank you, this is what I was aiming for. I didn't know the formula to achieve it.
|
Commented Aug 3, 2022 at 5:04
|
http://money.stackexchange.com/questions/17041/how-do-bond-funds-have-a-higher-return-that-the-sum-of-their-bond-parts?answertab=oldest
| 1,462,062,426,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-18/segments/1461860113541.87/warc/CC-MAIN-20160428161513-00099-ip-10-239-7-51.ec2.internal.warc.gz
| 199,594,123
| 17,443
|
# How do bond funds have a higher return than the sum of their bond parts?
In looking at bond funds recently, I've seen many that have a return of about 7% annualized over 5 years. However, their holdings are bonds that return 2-5% annualized. How is it possible that the bond fund returns a higher percentage than any one of its holdings?
-
Bond funds also buy and sell bonds, not just hold bonds and distribute the collected interest (less expenses) to the shareholders of the funds. So the coupon rate of a bond is usually different from the return of the bond. – Dilip Sarwate Oct 1 '12 at 16:35
A bond's value can exceed its face value as rates fall, and that bond's interest rate is above market rates. In the case of the fund you are looking at, rates have dropped over the last 5 years, creating gains in the bonds value within the fund.
-
Suppose you have a 10-year bond that you bought a few years ago at a 10% interest rate. Then there was a financial crisis, the Federal Reserve went on a bond-buying spree, and now interest rates are lower. A new bond with the same amount of risk as your bond and the same amount of time left would now run about a 5% interest rate. (Insert some mathematical fiddling here to adjust for the actual time at which the bond issues interest payments, in order to come to a mathematically-equivalent bond.)
If you sold your bond today, you could sell it at a price which implies a 5% rate of return. That means more money for you!
(If this confuses you, imagine having a series of 10% off coupons, replacing them with 5% off coupons, and pocketing the difference. In fact, if you think of a bond as a coupon on future-dollars, that's a very good way to understand it.)
If the interest rate drops by 1%, then the value of the bond will change by... well, the continuous-compounding formula is `P*e^(r*t)`. If we plug in 0.01 and 5 years to `e^(r*t)` (ie: r = 0.01, t = 5), then we get 1.0512, or a 5.1% change in value. So for our bond, we see a 5.1% gain immediately. If you have a 30-year bond, a drop in the interest rate by 1% would mean a 34.9% return all at once.
(Tip: For low interest rates and short times, fudge it and just use `r * t`, or `1% * 5` in our case. It'll be lower than the actual change.)
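A tiny Python check of the continuous-compounding figures quoted above (this is just the e^(r*t) approximation, not a full bond-pricing model):
import math
def price_change(rate_drop, years):
    # Relative price change of the bond when yields fall by rate_drop, using P*e^(r*t)
    return math.exp(rate_drop * years) - 1.0
print(round(100 * price_change(0.01, 5), 1))   # about 5.1 (%)
print(round(100 * price_change(0.01, 30), 1))  # about 35.0 (%), quoted above as 34.9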
Of course, there's the question of what you'll do with the money. Consider our example when the interest rate dropped to 5%: if you just bought another bond, you'd still only get a 5% rate of return. So it's not like you're going to earn any more or less money by doing this - hence why a bond is called a fixed-income investment. You've just moved your money-making forward in time: you haven't actually generated any more of it. Likewise, if interest rates drop in the future, the face value of your bond will fall, but you'll still get the same amount of money if you hold it until maturity.
Your bond fund will own many bonds, with varying amounts of time until they mature, and will continually roll over these bonds. They will probably give you a statistic on the average maturity of their bond holdings; this average maturity will affect how much the value of the bond fund is affected by interest rate changes, the bonds' interest rate risk. You can find short-term, medium-term and long-term bond funds. Longer-term funds are more sensitive to interest rates; short-term funds are less sensitive.
-
You wrote `Likewise, if interest rates drop in the future, the face value of your bond will fall`, but is this correct? Interest rates and bond price are negatively correlated. – LePressentiment Jul 1 '15 at 20:22
| 874
| 3,586
|
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.609375
| 4
|
CC-MAIN-2016-18
|
latest
|
en
| 0.963216
|
# How do bond funds have a higher return than the sum of their bond parts? In looking at bond funds recently, I've seen many that have a return of about 7% annualized over 5 years. However, their holdings are bonds that return 2-5% annualized. How is it possible that the bond fund returns a higher percentage than any one of its holdings? -
Bond funds also buy and sell bonds, not just hold bonds and distribute the collected interest (less expenses) to the shareholders of the funds. So the coupon rate of a bond is usually different from the return of the bond. – Dilip Sarwate Oct 1 '12 at 16:35
A bond's value can exceed its face value as rates fall, and that bond's interest rate is above market rates. In the case of the fund you are looking at, rates have dropped over the last 5 years, creating gains in the bonds value within the fund. -
Suppose you have a 10-year bond that you bought a few years ago at a 10% interest rate. Then there was a financial crisis, the Federal Reserve went on a bond-buying spree, and now interest rates are lower. A new bond with the same amount of risk as your bond and the same amount of time left would now run about a 5% interest rate. (Insert some mathematical fiddling here to adjust for the actual time at which the bond issues interest payments, in order to come to a mathematically-equivalent bond.) If you sold your bond today, you could sell it at a price which implies a 5% rate of return. That means more money for you! (If this confuses you, imagine having a series of 10% off coupons, replacing them with 5% off coupons, and pocketing the difference. In fact, if you think of a bond as a coupon on future-dollars, that's a very good way to understand it.) If the interest rate drops by 1%, then the value of the bond will change by... well, the continuous-compounding formula is `P*e^(r*t)`. If we plug in 0.01 and 5 years to `e^(r*t)` (ie: r = 0.01, t = 5), then we get 1.0512, or a 5.1% change in value. So for our bond, we see a 5.1% gain immediately. If you have a 30-year bond, a drop in the interest rate by 1% would mean a 34.9% return all at once. (Tip: For low interest rates and short times, fudge it and just use `r * t`, or `1% * 5` in our case. It'll be lower than the actual change.) Of course, there's the question of what you'll do with the money. Consider our example when the interest rate dropped to 5%: if you just bought another bond, you'd still only get a 5% rate of return. So it's not like you're going to earn any more or less money by doing this - hence why a bond is called a fixed-income investment. You've just moved your money-making forward in time: you haven't actually generated any more of it. Likewise, if interest rates drop in the future, the face value of your bond will fall, but you'll still get the same amount of money if you hold it until maturity. Your bond fund will own many bonds, with varying amounts of time until they mature, and will continually roll over these bonds. They will probably give you a statistic on the average maturity of their bond holdings; this average maturity will affect how much the value of the bond fund is affected by interest rate changes, the bonds' interest rate risk. You can find short-term, medium-term and long-term bond funds. Longer-term funds are more sensitive to interest rates; short-term funds are less sensitive. -
You wrote `Likewise, if interest rates drop in the future, the face value of your bond will fall`, but is this correct? Interest rates and bond price are negatively correlated.
|
– LePressentiment Jul 1 '15 at 20:22
|
https://math.stackexchange.com/questions/3948304/bound-the-minimum-eigenvalue-of-a-symmetric-matrix-via-matrix-norms
| 1,719,072,562,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198862404.32/warc/CC-MAIN-20240622144011-20240622174011-00361.warc.gz
| 333,695,871
| 35,879
|
# Bound the minimum eigenvalue of a symmetric matrix via matrix norms
I'm reading a paper in which the authors prove an inequality of the following form:
$$\lVert H-H'\rVert_2 \leq \lVert H-H'\rVert_F \leq \epsilon \tag 1$$
Here $$H$$ and $$H'$$ are symmetric real matrices ($$H'$$ has all positive eigenvalues, if that matters), and the norms are the $$L_2$$ matrix norm and the Frobenius norm, respectively. With no justification the authors then claim:
$$\lambda_\text{min}(H) \geq \lambda_\text{min}(H') - \epsilon \tag 2$$
where $$\lambda_\text{min}$$ is the minimum eigenvalue of a matrix.
I can't see how to justify this, or even if (2) is even intended to be deduced from the (1). Here is the paper - the end of the proof of Lemma 3.2, page 6.
This answer is based on this one. Below we will be working with some arbitrary inner product, and when we take the norm of a matrix, this means the operator norm associated with the vector norm we're using. We have:
Theorem. If $$A$$ and $$B$$ are real symmetric, then:
$$\lambda_\text{min} (A) \geq \lambda_\text{min} (B) - \lVert A-B\rVert$$ $$\lambda_\text{max} (A) \leq \lambda_\text{max} (B) + \lVert A-B\rVert$$
To prove this, the key is the expression $$x^T Mx$$, where $$M$$ is a symmetric matrix and $$x$$ has unit norm. We need two lemmas about this expression.
Lemma 1. For any matrix $$M$$ and any unit norm $$x$$: $$-\lVert M\rVert \leq x^T Mx\leq \lVert M\rVert$$ Proof. Simple application of Cauchy-Schwartz and of the definition of an operator norm: $$|x^TMx|\leq\lVert x\rVert \lVert Mx\rVert\leq \lVert x\rVert^2 \lVert M\rVert=\lVert M\rVert$$
Lemma 2. For any symmetric matrix $$M$$ and any unit norm $$x$$: $$\lambda_\text{min}(M) \leq x^T M x \leq \lambda_\text{max}(M)$$ and the bounds are attained as $$x$$ varies over the unit sphere.
Proof. Let $$M=P^TDP$$ where $$P$$ is orthogonal and $$D$$ is diagonal. Then $$x^TMx = (Px)^TD(Px)$$ As $$x$$ varies over the unit sphere, $$Px$$ varies also over the entire unit sphere, therefore the range of the latter expression above is simply the range of $$y^TDy$$ as $$y$$ ranges over the unit sphere. By the rearrangement inequality and some other simple arguments, the minimum is attained when $$y$$ is an eigenvector associated with $$\lambda_\text{min}(M)$$ and the maximum when $$y$$ is an eigenvector associated with $$\lambda_\text{max}(M)$$.
Finally we can prove the theorem. For any unit norm $$x$$, we have
$$x^TAx = x^TBx + x^T(A-B)x$$
By applying Lemma 1 to the second term and Lemma 2 to the first term, the minimum of the left hand side is at least $$\lambda_\text{min} (B)-\lVert A-B\rVert$$. By Lemma 2, we know that the minimum of the left hand side is equal to $$\lambda_\text{min} (A)$$. A similar argument shows the other inequality in the theorem.
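A numerical spot-check of the theorem with NumPy (random symmetric matrices and the spectral norm; purely illustrative, not part of the proof):
import numpy as np
rng = np.random.default_rng(0)
def rand_sym(n):
    m = rng.standard_normal((n, n))
    return (m + m.T) / 2
ok = True
for _ in range(1000):
    A, B = rand_sym(5), rand_sym(5)
    lhs = np.linalg.eigvalsh(A).min()
    rhs = np.linalg.eigvalsh(B).min() - np.linalg.norm(A - B, 2)  # spectral norm of A - B
    ok = ok and lhs >= rhs - 1e-10
print(ok)  # True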
| 845
| 2,803
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 38, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.734375
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.787102
|
# Bound the minimum eigenvalue of a symmetric matrix via matrix norms
I'm reading a paper in which the authors prove an inequality of the following form:
$$\lVert H-H'\rVert_2 \leq \lVert H-H'\rVert_F \leq \epsilon \tag 1$$
Here $$H$$ and $$H'$$ are symmetric real matrices ($$H'$$ has all positive eigenvalues, if that matters), and the norms are the $$L_2$$ matrix norm and the Frobenius norm, respectively. With no justification the authors then claim:
$$\lambda_\text{min}(H) \geq \lambda_\text{min}(H') - \epsilon \tag 2$$
where $$\lambda_\text{min}$$ is the minimum eigenvalue of a matrix. I can't see how to justify this, or even if (2) is even intended to be deduced from the (1). Here is the paper - the end of the proof of Lemma 3.2, page 6. This answer is based on this one. Below we will be working with some arbitrary inner product, and when we take the norm of a matrix, this means the operator norm associated with the vector norm we're using. We have:
Theorem. If $$A$$ and $$B$$ are real symmetric, then:
$$\lambda_\text{min} (A) \geq \lambda_\text{min} (B) - \lVert A-B\rVert$$ $$\lambda_\text{max} (A) \leq \lambda_\text{max} (B) + \lVert A-B\rVert$$
To prove this, the key is the expression $$x^T Mx$$, where $$M$$ is a symmetric matrix and $$x$$ has unit norm. We need two lemmas about this expression. Lemma 1. For any matrix $$M$$ and any unit norm $$x$$: $$-\lVert M\rVert \leq x^T Mx\leq \lVert M\rVert$$ Proof. Simple application of Cauchy-Schwartz and of the definition of an operator norm: $$|x^TMx|\leq\lVert x\rVert \lVert Mx\rVert\leq \lVert x\rVert^2 \lVert M\rVert=\lVert M\rVert$$
Lemma 2. For any symmetric matrix $$M$$ and any unit norm $$x$$: $$\lambda_\text{min}(M) \leq x^T M x \leq \lambda_\text{max}(M)$$ and the bounds are attained as $$x$$ varies over the unit sphere. Proof. Let $$M=P^TDP$$ where $$P$$ is orthogonal and $$D$$ is diagonal. Then $$x^TMx = (Px)^TD(Px)$$ As $$x$$ varies over the unit sphere, $$Px$$ varies also over the entire unit sphere, therefore the range of the latter expression above is simply the range of $$y^TDy$$ as $$y$$ ranges over the unit sphere. By the rearrangement inequality and some other simple arguments, the minimum is attained when $$y$$ is an eigenvector associated with $$\lambda_\text{min}(M)$$ and the maximum when $$y$$ is an eigenvector associated with $$\lambda_\text{max}(M)$$. Finally we can prove the theorem. For any unit norm $$x$$, we have
$$x^TAx = x^TBx + x^T(A-B)x$$
By applying Lemma 1 to the second term and Lemma 2 to the first term, the minimum of the left hand side is at least $$\lambda_\text{min} (B)-\lVert A-B\rVert$$.
|
By Lemma 2, we know that the minimum of the left hand side is equal to $$\lambda_\text{min} (A)$$.
|
http://math.stackexchange.com/questions/180895/subgroup-criterion/180897
| 1,469,485,278,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257824395.52/warc/CC-MAIN-20160723071024-00095-ip-10-185-27-174.ec2.internal.warc.gz
| 157,877,901
| 17,676
|
# Subgroup criterion.
I've been reading some stuff about algebra in my free time, and I think I understand most of the stuff but I'm having trouble with the exercises. Specifically, the following:
Prove that a nonempty subset $H$ of a group $G$ is a subgroup if for all $x, y \in H$, the element $xy^{-1}$ is also in H.
Proving that the identity is in $H$ is easy: just take $x=y$, so $x x^{-1} = 1 \in H$. However, I'm having trouble showing that multiplication is closed and that each element in $H$ has an inverse. Can anyone give some hints?
-
Hint: For inverses use the fact that the identity is in $H$. Once you have inverses, you can get products using that fact. – Matt Aug 10 '12 at 1:57
1. by given condition for any $x\in H$ we have $xx^{-1}=e$ is in $H$, denote identity element by $e$
2. take any $x\in H$ and $e\in H$ so by the given condition $ex^{-1}=x^{-1}\in H$ so every element of $H$ has inverse in $H$.
3. take any $x,y\in H$ as $y^{-1}\in H$ so by given condition $x(y^{-1})^{-1}=xy\in H$, which proves the closure property.
-
So much for hints... – Matt Aug 10 '12 at 2:02
For any $b \in H$, $eb^{-1} = b^{-1} \in H$, so every element has an inverse in $H$. To show closure, note that if $a, b \in H$, then $b^{-1} \in H$ as we have just shown. So $a(b^{-1})^{-1} = ab \in H$. Hence, $H$ is a subgroup.
-
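As a sanity check of the criterion (not a proof), here is a small Python script that enumerates the nonempty subsets of $\mathbb{Z}_6$ under addition and confirms that closure under $xy^{-1}$ (here $x-y \bmod 6$) coincides with being a subgroup:
from itertools import combinations
n = 6
def closed_under_difference(H):
    # the one-step test: x - y mod n stays in H
    return all((x - y) % n in H for x in H for y in H)
def is_subgroup(H):
    return (0 in H
            and all((-x) % n in H for x in H)
            and all((x + y) % n in H for x in H for y in H))
subsets = [set(c) for r in range(1, n + 1) for c in combinations(range(n), r)]
print(all(closed_under_difference(H) == is_subgroup(H) for H in subsets))  # True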
| 426
| 1,334
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.1875
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.915523
|
# Subgroup criterion. I've been reading some stuff about algebra in my free time, and I think I understand most of the stuff but I'm having trouble with the exercises. Specifically, the following:
Prove that a nonempty subset $H$ of a group $G$ is a subgroup if for all $x, y \in H$, the element $xy^{-1}$ is also in H.
Proving that the identity is in $H$ is easy: just take $x=y$, so $x x^{-1} = 1 \in H$. However, I'm having trouble showing that multiplication is closed and that each element in $H$ has an inverse. Can anyone give some hints? -
Hint: For inverses use the fact that the identity is in $H$. Once you have inverses, you can get products using that fact. – Matt Aug 10 '12 at 1:57
1. by given condition for any $x\in H$ we have $xx^{-1}=e$ is in $H$, denote identity element by $e$
2. take any $x\in H$ and $e\in H$ so by the given condition $ex^{-1}=x^{-1}\in H$ so every element of $H$ has inverse in $H$. 3. take any $x,y\in H$ as $y^{-1}\in H$ so by given condition $x(y^{-1})^{-1}=xy\in H$, which proves the closure property. -
So much for hints... – Matt Aug 10 '12 at 2:02
For any $b \in H$, $eb^{-1} = b^{-1} \in H$, so every element has an inverse in $H$. To show closure, note that if $a, b \in H$, then $b^{-1} \in H$ as we have just shown. So $a(b^{-1})^{-1} = ab \in H$.
|
Hence, $H$ is a subgroup.
|
https://mathematica.stackexchange.com/questions/9915/how-to-arrange-a-list-on-a-triangular-pattern
| 1,660,972,394,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-33/segments/1659882573908.30/warc/CC-MAIN-20220820043108-20220820073108-00457.warc.gz
| 367,190,095
| 66,677
|
# How to arrange a list on a triangular pattern?
I've made a multiplication table with this:
Then I removed the repeated permutations with:
gg = Range[1, 10]; Subsets[gg, {2}] // TableForm
Now, when the multiplication table has the repeated permutations removed, this table is now similar to a triangle:
(1,2) (1,3) (1,4) (1,5) (1,6) (1,7) (1,8) (1,9) (1,10) - 10 multiplications
(2,3) (2,4) (2,5) (2,6) (2,7) (2,8) (2,9) (2,10) - 9 multiplications
...
(9,10) - 1 multiplication
How can I build a triangular table based on this? I'm trying to use an arithmetic progression procedure like:
gg = Range[1, 10]; x = 1; gb = {};
While[x <= 9, AppendTo[gb, Take[Subsets[gg, {2}], {H, J}]]; x++]
I was thinking of doing
gg = Range[1, 10]; x = 1; gb = {};
H=0;J=0;
While[x <= 9, AppendTo[gb, Take[Subsets[gg, {2}], {H=H+1, J=J-1}]]; x++]
You can create your triangular table in a far simpler manner as follows:
Table[{i, j}, {i, 10}, {j, i + 1, 10}]
(* {
{{1, 2}, {1, 3}, {1, 4}, {1, 5}, {1, 6}, {1, 7}, {1, 8}, {1, 9}, {1, 10}},
{{2, 3}, {2, 4}, {2, 5}, {2, 6}, {2, 7}, {2, 8}, {2, 9}, {2, 10}},
{{3, 4}, {3, 5}, {3, 6}, {3, 7}, {3, 8}, {3, 9}, {3, 10}},
{{4, 5}, {4, 6}, {4, 7}, {4, 8}, {4, 9}, {4, 10}},
{{5, 6}, {5, 7}, {5, 8}, {5, 9}, {5, 10}},
{{6, 7}, {6, 8}, {6, 9}, {6, 10}},
{{7, 8}, {7, 9}, {7, 10}},
{{8, 9}, {8, 10}},
{{9, 10}}
} *)
If this was just a simple example and your real application requires you to partition some arbitrary list of length $N(N+1)/2$ into a triangular list with sublists of length $\{N, N-1, ..., 1\}$, then you can use Mr.Wizard's dynP from here:
dynP[Subsets[gg, {2}], Range[9, 1, -1]] // TableForm
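For readers working outside Mathematica, the same triangular table of pairs is a one-line nested comprehension in Python (a rough equivalent of the Table call above; the empty row for i = 10 is simply skipped):
n = 10
pairs = [[(i, j) for j in range(i + 1, n + 1)] for i in range(1, n)]
for row in pairs:
    print(row)
# [(1, 2), (1, 3), ..., (1, 10)]
# ...
# [(9, 10)]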
I think R.M covered your actual application, but here is some tangential thinking that may interest you or someone who finds this question.
n = 10;
UpperTriangularize@Array[Times, {n, n}]
$\begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ 0 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ 0 & 0 & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\ 0 & 0 & 0 & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ 0 & 0 & 0 & 0 & 25 & 30 & 35 & 40 & 45 & 50 \\ 0 & 0 & 0 & 0 & 0 & 36 & 42 & 48 & 54 & 60 \\ 0 & 0 & 0 & 0 & 0 & 0 & 49 & 56 & 63 & 70 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 64 & 72 & 80 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 81 & 90 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 \end{array}$
(i = 1; NestList[Rest@# + Range[++i, n] &, Range@n, n - 1])
$\begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ & & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\ & & & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ & & & & 25 & 30 & 35 & 40 & 45 & 50 \\ & & & & & 36 & 42 & 48 & 54 & 60 \\ & & & & & & 49 & 56 & 63 & 70 \\ & & & & & & & 64 & 72 & 80 \\ & & & & & & & & 81 & 90 \\ & & & & & & & & & 100 \end{array}$
Table[i j, {i, n}, {j, i, n}]
$\begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ & & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\ & & & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ & & & & 25 & 30 & 35 & 40 & 45 & 50 \\ & & & & & 36 & 42 & 48 & 54 & 60 \\ & & & & & & 49 & 56 & 63 & 70 \\ & & & & & & & 64 & 72 & 80 \\ & & & & & & & & 81 & 90 \\ & & & & & & & & & 100 \end{array}$
n = 5000;
UpperTriangularize@Array[Times, {n, n}]; // Timing
(i = 1; NestList[Rest@# + Range[++i, n] &, Range@n, n - 1]); // Timing
Table[i j, {i, n}, {j, i, n}]; // Timing
{0.047, Null}
{0.047, Null}
{3.12, Null}
n = 5000;
UpperTriangularize@Array[Times, {n, n}];
MaxMemoryUsed[]
Quit[]
214923080
n = 5000;
(i = 1; NestList[Rest@# + Range[++i, n] &, Range@n, n - 1]);
MaxMemoryUsed[]
Quit[]
65867144
n = 5000;
Table[i j, {i, n}, {j, i, n}];
MaxMemoryUsed[]
Quit[]
315167560
• Yes. Your answer is also very useful. Aug 27, 2012 at 19:44
| 1,799
| 3,785
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.734375
| 4
|
CC-MAIN-2022-33
|
longest
|
en
| 0.752631
|
# How to arrange a list on a triangular pattern? I've made a multiplication table with this:
Then I removed the repeated permutations with:
gg = Range[1, 10]; Subsets[gg, {2}] // TableForm
Now, when the multiplication table has the repeated permutations removed, this table is now similar to a triangle:
(1,2) (1,3) (1,4) (1,5) (1,6) (1,7) (1,8) (1,9) (1,10) - 10 multiplications
(2,3) (2,4) (2,5) (2,6) (2,7) (2,8) (2,9) (2,10) - 9 multiplications
...
(9,10) - 1 multiplication
How can I build a triangular table based on this? I'm trying to use an arithmetic progression procedure like:
gg = Range[1, 10]; x = 1; gb = {};
While[x <= 9, AppendTo[gb, Take[Subsets[gg, {2}], {H, J}]]; x++]
I was thinking of doing
gg = Range[1, 10]; x = 1; gb = {};
H=0;J=0;
While[x <= 9, AppendTo[gb, Take[Subsets[gg, {2}], {H=H+1, J=J-1}]]; x++]
You can create your triangular table in a far simpler manner as follows:
Table[{i, j}, {i, 10}, {j, i + 1, 10}]
(* {
{{1, 2}, {1, 3}, {1, 4}, {1, 5}, {1, 6}, {1, 7}, {1, 8}, {1, 9}, {1, 10}},
{{2, 3}, {2, 4}, {2, 5}, {2, 6}, {2, 7}, {2, 8}, {2, 9}, {2, 10}},
{{3, 4}, {3, 5}, {3, 6}, {3, 7}, {3, 8}, {3, 9}, {3, 10}},
{{4, 5}, {4, 6}, {4, 7}, {4, 8}, {4, 9}, {4, 10}},
{{5, 6}, {5, 7}, {5, 8}, {5, 9}, {5, 10}},
{{6, 7}, {6, 8}, {6, 9}, {6, 10}},
{{7, 8}, {7, 9}, {7, 10}},
{{8, 9}, {8, 10}},
{{9, 10}}
} *)
If this was just a simple example and your real application requires you to partition some arbitrary list of length $N(N+1)/2$ into a triangular list with sublists of length $\{N, N-1, ..., 1\}$, then you can use Mr.Wizard's dynP from here:
dynP[Subsets[gg, {2}], Range[9, 1, -1]] // TableForm
I think R.M covered your actual application, but here is some tangential thinking that may interest you or someone who finds this question.
|
n = 10;
UpperTriangularize@Array[Times, {n, n}]
$\begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ 0 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ 0 & 0 & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\ 0 & 0 & 0 & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ 0 & 0 & 0 & 0 & 25 & 30 & 35 & 40 & 45 & 50 \\ 0 & 0 & 0 & 0 & 0 & 36 & 42 & 48 & 54 & 60 \\ 0 & 0 & 0 & 0 & 0 & 0 & 49 & 56 & 63 & 70 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 64 & 72 & 80 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 81 & 90 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 \end{array}$
(i = 1; NestList[Rest@# + Range[++i, n] &, Range@n, n - 1])
$\begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ & & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\ & & & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ & & & & 25 & 30 & 35 & 40 & 45 & 50 \\ & & & & & 36 & 42 & 48 & 54 & 60 \\ & & & & & & 49 & 56 & 63 & 70 \\ & & & & & & & 64 & 72 & 80 \\ & & & & & & & & 81 & 90 \\ & & & & & & & & & 100 \end{array}$
Table[i j, {i, n}, {j, i, n}]
$\begin{array}{cccccccccc} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ & & 9 & 12 & 15 & 18 & 21 & 24 & 27 & 30 \\ & & & 16 & 20 & 24 & 28 & 32 & 36 & 40 \\ & & & & 25 & 30 & 35 & 40 & 45 & 50 \\ & & & & & 36 & 42 & 48 & 54 & 60 \\ & & & & & & 49 & 56 & 63 & 70 \\ & & & & & & & 64 & 72 & 80 \\ & & & & & & & & 81 & 90 \\ & & & & & & & & & 100 \end{array}$
n = 5000;
UpperTriangularize@Array[Times, {n, n}]; // Timing
(i = 1; NestList[Rest@# + Range[++i, n] &, Range@n, n - 1]); // Timing
Table[i j, {i, n}, {j, i, n}]; // Timing
{0.047, Null}
{0.047, Null}
{3.12, Null}
n = 5000;
UpperTriangularize@Array[Times, {n, n}];
MaxMemoryUsed[]
Quit[]
214923080
n = 5000;
(i = 1; NestList[Rest@# + Range[++i, n] &, Range@n, n - 1]);
MaxMemoryUsed[]
Quit[]
65867144
n = 5000;
Table[i j, {i, n}, {j, i, n}];
MaxMemoryUsed[]
Quit[]
315167560
• Yes.
|
https://dsp.stackexchange.com/questions/75231/how-to-solve-image-denoising-with-total-variation-prior-using-admm?noredirect=1
| 1,713,953,940,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296819089.82/warc/CC-MAIN-20240424080812-20240424110812-00086.warc.gz
| 190,842,151
| 41,072
|
# How to Solve Image Denoising with Total Variation Prior Using ADMM?
I was looking at some articles or Wikipedia on denoising images using the Total Variation norm. The setup is the Rudin Osher Fatemi (ROF) scheme, and the corresponding equation is:
$$F(u)=\int_{\Omega}|D u|+\lambda \int_{\Omega}(K u-f)^{2} d x$$
Some of the sources mentioned using the ADMM optimizer to solve this denoising problem. But I was hoping that someone might be able to direct me to some code to show an implementation of this approach. Code in MATLAB, Julia, or Python would be excellent, just something to get started with.
Thanks.
## Formulation of the Denoising Problem
The problem is given by:
$$\arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda \operatorname{TV} \left( x \right) = \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\| D x \right\|}_{1}$$
Where $$D$$ is the column stacked derivative operator.
In the above I used the Anisotropic TV Norm.
The ADMM problem will be formulated as:
\begin{aligned} \arg \min_{x, z} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\| z \right\|}_{1} \\ \text{subject to} \quad & D x = z \end{aligned}
The ADMM will have 3 steps:
1. vX = mC \ (vY + (paramRho * mD.' * (vZ - vU)));.
2. vZ = ProxL1(mD * vX + vU, paramLambda / paramRho);.
3. vU = vU + mD * vX - vZ;.
Where mC = decomposition(mI + paramRho * (mD.' * mD), 'chol');.
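For reference, here is a minimal Python/NumPy sketch of the same three updates for a 1-D signal with a dense forward-difference matrix; it is only meant to show the structure of the iteration, not to reproduce the MATLAB solver or its performance.
import numpy as np
def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
def tv_denoise_admm(y, lam, rho=1.0, n_iter=200):
    # minimize 0.5 * ||x - y||^2 + lam * ||D x||_1 (anisotropic TV) with ADMM
    n = y.size
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = np.eye(n) + rho * D.T @ D                        # system matrix of the x-update
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))  # x-update
        z = soft_threshold(D @ x + u, lam / rho)         # z-update
        u = u + D @ x - z                                # dual update
    return x
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, -0.5, 2.0], 50)
noisy = clean + 0.2 * rng.standard_normal(clean.size)
print(np.round(tv_denoise_admm(noisy, lam=1.0)[:5], 3))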
I coded this solution in a MATLAB function - SolveProxTvAdmm().
I compared it to a reference by CVX:
The full code is available on my StackExchange Signal Processing Q75231 GitHub Repository (Look at the SignalProcessing\Q75231 folder).
Remark: For the Deblurring problem, open a new question and I will post a code for it as well.
I have done a bit of this myself, though you'd need to adapt it.
There is a self-implemented Douglas-Rachford method and a primal-dual approach implemented here, in Recovery of Fusion Frame Structured Signal via Compressed Sensing.
Note that Clarice Poon (Bath University) had some nice tutorials on it.
Another source is the Numerical Tours from Gabriel Peyre. See Denoising by Sobolev and Total Variation Regularization.
• This is great information. I will take a look at the references you cited. Ihave not really been able to find too many tutorials on this topic, so I really appreciate the pointer in the right direction. May 18, 2021 at 17:51
| 698
| 2,414
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.71875
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.795869
|
# How to Solve Image Denoising with Total Variation Prior Using ADMM? I was looking at some articles or Wikipedia on denoising images using the Total Variation norm. The setup is the Rudin Osher Fatemi (ROF) scheme, and the corresponding equation is:
$$F(u)=\int_{\Omega}|D u|+\lambda \int_{\Omega}(K u-f)^{2} d x$$
Some of the sources mentioned using the ADMM optimizer to solve this denoising problem. But I was hoping that someone might be able to direct me to some code to show an implementation of this approach. Code in MATLAB, Julia, or Python would be excellent, just something to get started with. Thanks. ## Formulation of the Denoising Problem
The problem is given by:
$$\arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda \operatorname{TV} \left( x \right) = \arg \min_{x} \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\| D x \right\|}_{1}$$
Where $$D$$ is the column stacked derivative operator. In the above I used the Anisotropic TV Norm.
|
The ADMM problem will be formulated as:
\begin{aligned} \arg \min_{x, z} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda {\left\| z \right\|}_{1} \\ \text{subject to} \quad & D x = z \end{aligned}
The ADMM will have 3 steps:
1. vX = mC \ (vY + (paramRho * mD.'
|
https://mathematica.stackexchange.com/questions/59135/how-can-mathematica-help-me-to-find-a-real-radical-expression-for-roots-of-this?noredirect=1
| 1,720,786,126,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763514387.30/warc/CC-MAIN-20240712094214-20240712124214-00814.warc.gz
| 251,302,231
| 41,426
|
# How can Mathematica help me to find a real radical expression for roots of this polynomial?
The polynomial $P(x)=x^4-4x^2-2x+1$ has 4 real roots (this can be clearly checked by plotting). But solving $P(x)=0$ using Solve[x^4-4x^2-2x+1==0,x] leads to x=-1 and 3 other roots which appear to be complex numbers (although they are not), as they are represented in terms of $i$ (the imaginary unit):
But I want to have the roots represented in a real closed radical expression. I mean neither in trigonometric representation (such as the output of ComplexExpand[]) nor with any $i$'s in it. Is there any simplification function or procedure that can help? I've tried Simplify[] and FullSimplify[] and their various options. I've even combined them with some other expression manipulation functions such as Expand[], Refine[] and ComplexExpand[], but I could not reach my goal.
• Does Re help? Commented Sep 8, 2014 at 6:12
• I remember from my university days in the distant past that some real roots of certain quadric equations have no representation in terms of combinations of radicals and rational numbers. This may be such a case. Commented Sep 8, 2014 at 6:12
• See Casus Irreducibilis and/or this notebook. The roots can be expressed without the imaginary unit, if you are willing to accept trig functions - just hit your output with ComplexExpand. Commented Sep 8, 2014 at 6:43
• To state more strongly what other comments note: what you want cannot be done. Commented Sep 8, 2014 at 15:08
As stated in the comments, it is not possible to get the exact roots in the desired form.
However, it is possible to get them in any arbitrary precision (100-digit precision in the following example):
Rationalize[N[Solve[x^4 - 4 x^2 - 2 x + 1 == 0, x], 100], 0]
{{x -> -1},
{x -> 148845339002531569051638627576397352071169969019092/
68589588442601901747538163421051848353066356246917},
{x -> -(53523407249914715278682495786123302627314986818931/
36135304532328057570688476882478262824336711940507)},
{x -> 24375413419753596751919874615468440606916785148237/
78350372608103708508713638007384658562420306644851}}
x^4 - 4 x^2 - 2 x + 1 /. % // N
{0., 3.3459*10^-99, 4.60681*10^-100, 1.13719*10^-100}
Radicals are traditional bronze-age mathematics, but they aren't the nicest way to express the roots of a polynomial. Radical expressions are numerically unstable when a discriminant is near zero, and they often require complex arithmetic even for real results, as you've seen.
In the space age, we have Root objects, one of the real gems of Mathematica.
roots = Solve[x^4 - 4 x^2 - 2 x + 1 == 0, x, Cubics -> False]
{{x -> -1}, {x -> Root[1 - 3 #1 - #1^2 + #1^3 &, 1]}, {x -> Root[1 - 3 #1 - #1^2 + #1^3 &, 2]}, {x -> Root[1 - 3 #1 - #1^2 + #1^3 &, 3]}}
Root objects look a little peculiar, but when constructed using exact constants (as above) Mathematica treats them as exact numbers. It can, for example, tell you that the imaginary part of each root is exactly zero:
Im[x /. roots]
{0, 0, 0, 0}
When you want numerical results, Root objects are not prone to the numerical instability of radical expressions, nor will real roots contain the imaginary artifacts of incomplete cancellation.
roots // N
{{x -> -1.}, {x -> -1.48119}, {x -> 0.311108}, {x -> 2.17009}}
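SymPy offers a close analogue of these Root objects; a short sketch, assuming a standard SymPy installation:
import sympy as sp
x = sp.symbols('x')
p = x**4 - 4*x**2 - 2*x + 1
roots = sp.real_roots(p, x)          # exact algebraic numbers; irrational ones print as CRootOf objects
print(roots)
print([r.evalf(20) for r in roots])  # arbitrary-precision numerical values
print([r.is_real for r in roots])    # all True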
| 1,013
| 3,334
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.671875
| 4
|
CC-MAIN-2024-30
|
latest
|
en
| 0.865796
|
# How can Mathematica help me to find a real radical expression for roots of this polynomial?
The polynomial $P(x)=x^4-4x^2-2x+1$ has 4 real roots (this can be clearly checked by plotting).
|
But solving $P(x)=0$ using Solve[x^4-4x^2-2x+1==0,x] leads to x=-1 and 3 other roots which appear to be complex numbers (although they are not), as they are represented in terms of $i$ (the imaginary unit):
But I want to have the roots represented in a real closed radical expression.
|
https://quantumcomputing.stackexchange.com/questions/5240/whats-my-computational-basis-if-i-want-to-define-a-unitary-operator-that-implem
| 1,718,876,757,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-26/segments/1718198861916.26/warc/CC-MAIN-20240620074431-20240620104431-00486.warc.gz
| 406,811,011
| 39,914
|
What's my computational basis if I want to define a unitary operator that implements a function such as $f(i) = 2^{i+1} \text{mod 21}$?
I know I must define $$U_f$$, the unitary operator, on the computational basis. But what's my computational basis here?
Presumably you want to work with qubits? So the usual computation basis applies: $$|0\rangle$$ and $$|1\rangle$$ for a single qubit, and a composite basis of $$|x\rangle$$ for $$x\in\{0,1\}^n$$ when composing $$n$$ qubits.
What I guess you're asking is how you translate your problem onto qubits. For that, you need to make the decimal values correspond to particular bit values. The conventional way of doing this is using the binary string $$x$$ like a binary number, which corresponds to a decimal value. There are a number of different numbering conventions you can pick from but, for example, you might have $$x=x_0x_1x_2\ldots x_{n-1},$$ meaning that the corresponding decimal value is $$x_{n-1}+2x_{n-2}+4x_{n-3}+\ldots+2^{n-2}x_1+2^{n-1}x_0.$$ $$n$$ bits lets you represent any decimal number 0 to $$2^n-1$$.
It depends on the $$i$$ to which you want to apply this function, which is to be represented as a bitstring (in this case, I guess an unsigned integer representation) that will be one of the computational basis states. It can be a particular one or just a superposition. You need to define the registers (the one containing $$i$$, a garbage register containing intermediary results, and the one containing the result). Mathematically, your operator will have the following effect if you uncompute the intermediary results: $$U_f | i \rangle | 0 \rangle_g | 0 \rangle_f = | i \rangle | 0 \rangle_g | f(i) \rangle_f$$
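A small NumPy illustration of this construction (my own toy example, omitting the garbage register): build the permutation matrix that sends |i>|y> to |i>|y XOR f(i)> for f(i) = 2^(i+1) mod 21, and check that it is unitary.
import numpy as np
def f(i):
    return pow(2, i + 1, 21)   # f(i) = 2^(i+1) mod 21
n_in, n_out = 3, 5             # 3 qubits for i (0..7), 5 qubits for the result (0..31 covers 0..20)
dim = 2 ** (n_in + n_out)
U = np.zeros((dim, dim))
for i in range(2 ** n_in):
    for y in range(2 ** n_out):
        src = i * 2 ** n_out + y            # basis index of |i>|y>
        dst = i * 2 ** n_out + (y ^ f(i))   # |i>|y XOR f(i)> keeps the map reversible
        U[dst, src] = 1.0
# U is a permutation matrix, hence unitary, and U|i>|0> = |i>|f(i)>
print(np.allclose(U.T @ U, np.eye(dim)))   # True
print([f(i) for i in range(8)])            # [2, 4, 8, 16, 11, 1, 2, 4]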
| 450
| 1,668
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.75
| 4
|
CC-MAIN-2024-26
|
latest
|
en
| 0.831639
|
What's my computational basis if I want to define a unitary operator that implements a function such as $f(i) = 2^{i+1} \text{mod 21}$? I know I must define $$U_f$$, the unitary operator, on the computational basis. But what's my computational basis here? Presumably you want to work with qubits? So the usual computation basis applies: $$|0\rangle$$ and $$|1\rangle$$ for a single qubit, and a composite basis of $$|x\rangle$$ for $$x\in\{0,1\}^n$$ when composing $$n$$ qubits. What I guess you're asking is how you translate your problem onto qubits. For that, you need to make the decimal values correspond to particular bit values. The conventional way of doing this is using the binary string $$x$$ like a binary number, which corresponds to a decimal value. There are a number of different numbering conventions you can pick from but, for example, you might have $$x=x_0x_1x_2\ldots x_{n-1},$$ meaning that the corresponding decimal value is $$x_{n-1}+2x_{n-2}+4x_{n-3}+\ldots+2^{n-2}x_1+2^{n-1}x_0.$$ $$n$$ bits lets you represent any decimal number 0 to $$2^n-1$$. It depends on the $$i$$ you want to apply this function, which is to be represented as a bitstring (in this case, I guess an unsigned integer representation) that will be one of the computational basis. It can be a particular one or just a superposition. You need to define the registers (the one containing $$i$$, garbage register containing intermediary results and the one containing the result).
|
Mathematically, your operator will have the following effect if you uncompute intermediary results : $$U_f | i \rangle | 0 \rangle_g | 0 \rangle_f = | i \rangle | 0 \rangle_g | f(i) \rangle_f$$
|
http://math.stackexchange.com/questions/837209/did-i-integrate-this-correctly
| 1,469,386,789,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257824133.26/warc/CC-MAIN-20160723071024-00245-ip-10-185-27-174.ec2.internal.warc.gz
| 165,234,333
| 18,560
|
# Did I integrate this correctly?
The question was: $$\int 2x^2 (x^3-4)^6\ dx$$
My answer was $\dfrac{(x^3-4)^7}{7} + C$.
If my answer is wrong please show me the correct method. The textbook doesn't have answers so I turn to my trusty stackexchange users.
-
It might be useful to you to check out this guide on mathjax to make your questions look nice and pretty :D – DanZimm Jun 17 '14 at 12:33
Why are you asking this? Can't you simply differentiate the supposed result and check whehter you get the functions that was integrated?? – DonAntonio Jun 17 '14 at 12:34
@user151764, you have done right but not taken the constant $2/21$ – lavkush Jun 17 '14 at 12:34
There are some online calculators to help finding integrals, limits, series, or derivatives. – João Jun 17 '14 at 12:37
What does that have to do with anything at all, @DanZimm ? This is indefinite integration = anti-differentiation. – DonAntonio Jun 17 '14 at 12:39
Letting $$u = x^3 - 4 \implies du = 3x^2\,dx \iff \frac{du}{3} = x^2\,dx \iff \dfrac 23\,du = 2x^2$$
$$\int 2x^2(x^3 - 4)^6 \,dx = \int (\underbrace{x^3 - 4}_{\large u})^6(\underbrace{2x^2\,dx}_{\large \frac 23 \,du}) = \int u^6 \left(\frac 23 du\right) = \dfrac 23\int u^6\,du$$
So you'll need to multiply your result by $\dfrac 23$: $$\dfrac 23\cdot \frac{u^{7}}{7} + c = \dfrac 2{21}(x^3 - 4)^7 + c$$
-
Okay thank you I see what I did wrong. I forgot to multiply by 2/3 to make du/dx equal 2x^2. – user151764 Jun 17 '14 at 12:39
You're welcome! – amWhy Jun 17 '14 at 12:41
Let $u=x^3-4\;\Rightarrow\;du=3x^2\ dx$, then \begin{align} \require{cancel} \int 2x^2(x^3-4)^6\ dx&=\int 2\color{red}{\cancel{\color{black}{x^2}}}u^6\cdot\frac{du}{3\color{red}{\cancel{\color{black}{x^2}}}}\\ &=\frac23\int u^6\ du\\ &=\frac23\cdot\frac17u^7+C\\ &=\frac2{21}(x^3-4)^7+C. \end{align}
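For anyone who wants a machine check of these answers, a short SymPy verification that differentiates the proposed antiderivative:
import sympy as sp
x = sp.symbols('x')
candidate = sp.Rational(2, 21) * (x**3 - 4)**7
print(sp.simplify(sp.diff(candidate, x) - 2*x**2*(x**3 - 4)**6))  # 0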
-
Thanks mate really helpful! – user151764 Jun 17 '14 at 12:40
@user151764 You're welcome. $\ddot\smile$ – Tunk-Fey Jun 17 '14 at 12:40
| 747
| 1,949
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.25
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.779711
|
# Did I integrate this correctly? The question was: $$\int 2x^2 (x^3-4)^6\ dx$$
My answer was $\dfrac{(x^3-4)^7}{7} + C$. If my answer is wrong please show me the correct method. The textbook doesn't have answers so I turn to my trusty stackexchange users. -
It might be useful to you to check out this guide on mathjax to make your questions look nice and pretty :D – DanZimm Jun 17 '14 at 12:33
Why are you asking this? Can't you simply differentiate the supposed result and check whehter you get the functions that was integrated?? – DonAntonio Jun 17 '14 at 12:34
@user151764, you have done right but not taken the constant $2/21$ – lavkush Jun 17 '14 at 12:34
There are some online calculators to help finding integrals, limits, series, or derivatives. – João Jun 17 '14 at 12:37
What does that have to do with anything at all, @DanZimm ? This is indefinite integration = anti-differentiation. – DonAntonio Jun 17 '14 at 12:39
Letting $$u = x^3 - 4 \implies du = 3x^2\,dx \iff \frac{du}{3} = x^2\,dx \iff \dfrac 23\,du = 2x^2$$
$$\int 2x^2(x^3 - 4)^6 \,dx = \int (\underbrace{x^3 - 4}_{\large u})^6(\underbrace{2x^2\,dx}_{\large \frac 23 \,du}) = \int u^6 \left(\frac 23 du\right) = \dfrac 23\int u^6\,du$$
So you'll need to multiply your result by $\dfrac 23$: $$\dfrac 23\cdot \frac{u^{7}}{7} + c = \dfrac 2{21}(x^3 - 4)^7 + c$$
-
Okay thank you I see what I did wrong. I forgot to multiply by 2/3 to make du/dx equal 2x^2. – user151764 Jun 17 '14 at 12:39
You're welcome!
|
– amWhy Jun 17 '14 at 12:41
Let $u=x^3-4\;\Rightarrow\;du=3x^2\ dx$, then \begin{align} \require{cancel} \int 2x^2(x^3-4)^6\ dx&=\int 2\color{red}{\cancel{\color{black}{x^2}}}u^6\cdot\frac{du}{3\color{red}{\cancel{\color{black}{x^2}}}}\\ &=\frac23\int u^6\ du\\ &=\frac23\cdot\frac17u^7+C\\ &=\frac2{21}(x^3-4)^7+C.
|
https://math.stackexchange.com/questions/631182/how-to-prove-that-n-sum-d-mid-n-frac-mudd-sum-d2-mid-n-mud-sigm
| 1,566,653,122,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-35/segments/1566027321140.82/warc/CC-MAIN-20190824130424-20190824152424-00427.warc.gz
| 557,044,255
| 31,045
|
# How to prove that $n\sum_{d\mid n}\frac{|\mu(d)|}{d}=\sum_{d^2\mid n}\mu(d)\sigma\left(\frac{n}{d^2}\right)$?
This is problem 11 part b in chapter 3 of Tom M. Apostol's "Introduction to Analytic Number Theory". A variation on Euler's totient function is defined as $$\varphi_1(n) = n \sum_{d \mid n} \frac{|\mu(d)|}{d}$$ The question asks to show that $$\varphi_1(n) = \sum_{d^2 \mid n} \mu(d) \sigma\left( \frac{n}{d^2} \right)$$ My attempt so far: I have proved in part (a) of the same question that $$\varphi_1(n) = n \prod_{p \mid n}\left(1 + \frac{1}{p} \right)$$ And so in an attempt to equate these two expressions I write \begin{eqnarray} \varphi_1(n) &=& n \prod_{p \mid n}\frac{p + 1}{p} \\ &=& n \left(\prod_{\substack{p \mid n \\ p^2 \mid n}}\frac{p + 1}{p}\right)\left(\prod_{\substack{p \mid n \\ p^2 \nmid n}}\frac{p + 1}{p}\right) \end{eqnarray} Let $s = \prod_{\substack{p \mid n \\ p^2 \mid n}} p$ and $r = \prod_{\substack{p \mid n \\ p^2 \nmid n}} p$. Then \begin{eqnarray} \varphi_1(n) &=& n \frac{\sigma(s)}{s} \frac{\sigma(r)}{r} \\ &=& \sigma(s) \sigma(r) \frac{n}{sr} \end{eqnarray} Using $N = \mu * \sigma$ where $*$ is the Dirichlet convolution and $N(n) = n$ we obtain \begin{eqnarray} \varphi_1(n) &=& \sigma(s) \sigma(r) \sum_{d \mid \frac{n}{sr}}\mu(d) \sigma\left( \frac{n}{srd} \right) \end{eqnarray} Because $(r, \frac{n}{srd}) = 1$ we can simplify the expression: \begin{eqnarray} \varphi_1(n) &=& \sigma(s) \sum_{d \mid \frac{n}{sr}}\mu(d) \sigma\left( \frac{n}{sd} \right) \end{eqnarray} We know that $\frac{n}{r}$ is square, and that all $d$ that contribute (a non zero value) to the above sum are squarefree, because otherwise $\mu(d) = 0$, and so the sum over $d$ such that $d \mid \frac{n}{sr}$ is the same as the sum over $d$ such that $d^2 \mid n$. This brings us to where I am stuck: \begin{eqnarray} \varphi_1(n) &=& \sigma(s) \sum_{d^2 \mid n}\mu(d) \sigma\left( \frac{n}{sd} \right) \end{eqnarray} I cannot see how to proceed from here. Hints or answers for how to proceed from where I currently am, or how to show what is required using a different approach would be appreciated.
Since $\varphi_1$ is multiplicative it suffices to show this for prime powers $p^k$. The product definition in part (a) yields $$\varphi_1(p^k)=p^k\prod_{p|n}(1+p^{-1})=p^k+p^{k-1}$$ The formula in part (b) yields $$\sum_{d^2|n}\mu(d)\sigma(\frac{n}{d^2})=\sigma(p^k)-\sigma(p^{k-2})=\frac{(p^{k+1}-1)-(p^{k-1}-1)}{p-1}=p^k+p^{k-1}.$$ so that the two definitions are the same.
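A brute-force numerical check of the identity for small n, using plain Python helpers (this only corroborates the prime-power argument above, it is not a proof):
def factorize(n):
    # trial-division factorization, fine for small n
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]
def mobius(n):
    f = factorize(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)
def sigma(n):
    return sum(divisors(n))
def phi1_sum(n):   # n * sum_{d|n} |mu(d)|/d, kept exact with integers
    return sum(abs(mobius(d)) * (n // d) for d in divisors(n))
def phi1_conv(n):  # sum_{d^2|n} mu(d) * sigma(n/d^2)
    return sum(mobius(d) * sigma(n // (d * d))
               for d in range(1, int(n**0.5) + 1) if n % (d * d) == 0)
print(all(phi1_sum(n) == phi1_conv(n) for n in range(1, 200)))  # True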
| 960
| 2,508
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.03125
| 4
|
CC-MAIN-2019-35
|
latest
|
en
| 0.63462
|
# How to prove that $n\sum_{d\mid n}\frac{|\mu(d)|}{d}=\sum_{d^2\mid n}\mu(d)\sigma\left(\frac{n}{d^2}\right)$? This is problem 11 part b in chapter 3 of Tom M. Apostol's "Introduction to Analytic Number Theory". A variation on Euler's totient function is defined as $$\varphi_1(n) = n \sum_{d \mid n} \frac{|\mu(d)|}{d}$$ The question asks to show that $$\varphi_1(n) = \sum_{d^2 \mid n} \mu(d) \sigma\left( \frac{n}{d^2} \right)$$ My attempt so far: I have proved in part (a) of the same question that $$\varphi_1(n) = n \prod_{p \mid n}\left(1 + \frac{1}{p} \right)$$ And so in an attempt to equate these two expressions I write \begin{eqnarray} \varphi_1(n) &=& n \prod_{p \mid n}\frac{p + 1}{p} \\ &=& n \left(\prod_{\substack{p \mid n \\ p^2 \mid n}}\frac{p + 1}{p}\right)\left(\prod_{\substack{p \mid n \\ p^2 \nmid n}}\frac{p + 1}{p}\right) \end{eqnarray} Let $s = \prod_{\substack{p \mid n \\ p^2 \mid n}} p$ and $r = \prod_{\substack{p \mid n \\ p^2 \nmid n}} p$. Then \begin{eqnarray} \varphi_1(n) &=& n \frac{\sigma(s)}{s} \frac{\sigma(r)}{r} \\ &=& \sigma(s) \sigma(r) \frac{n}{sr} \end{eqnarray} Using $N = \mu * \sigma$ where $*$ is the Dirichlet convolution and $N(n) = n$ we obtain \begin{eqnarray} \varphi_1(n) &=& \sigma(s) \sigma(r) \sum_{d \mid \frac{n}{sr}}\mu(d) \sigma\left( \frac{n}{srd} \right) \end{eqnarray} Because $(r, \frac{n}{srd}) = 1$ we can simplify the expression: \begin{eqnarray} \varphi_1(n) &=& \sigma(s) \sum_{d \mid \frac{n}{sr}}\mu(d) \sigma\left( \frac{n}{sd} \right) \end{eqnarray} We know that $\frac{n}{r}$ is square, and that all $d$ that contribute (a non zero value) to the above sum are squarefree, because otherwise $\mu(d) = 0$, and so the sum over $d$ such that $d \mid \frac{n}{sr}$ is the same as the sum over $d$ such that $d^2 \mid n$. This brings us to where I am stuck: \begin{eqnarray} \varphi_1(n) &=& \sigma(s) \sum_{d^2 \mid n}\mu(d) \sigma\left( \frac{n}{sd} \right) \end{eqnarray} I cannot see how to proceed from here. Hints or answers for how to proceed from where I currently am, or how to show what is required using a different approach would be appreciated. Since $\varphi_1$ is multiplicative it suffices to show this for prime powers $p^k$.
|
The product definition in part (a) yields $$\varphi_1(p^k)=p^k\prod_{p|n}(1+p^{-1})=p^k+p^{k-1}$$ The formula in part (b) yields $$\sum_{d^2|n}\mu(d)\sigma(\frac{n}{d^2})=\sigma(p^k)-\sigma(p^{k-2})=\frac{(p^{k+1}-1)-(p^{k-1}-1)}{p-1}=p^k+p^{k-1}.$$ so that the two definitions are the same.
|
http://crypto.stackexchange.com/questions/11509/computing-p-and-q-from-private-key
| 1,469,723,730,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257828286.80/warc/CC-MAIN-20160723071028-00163-ip-10-185-27-174.ec2.internal.warc.gz
| 56,281,625
| 19,271
|
Computing p and q from private key
We are given n (public modulus) where n=pq and e (encryption exponent). Then I was able to crack the private key d, using Wiener's attack. So now, I have (n,e,d). My question is, is there a way to calculate p and q from this information? If so, any links and explanation would be much appreciated!
-
– Ricky Demer Nov 4 '13 at 16:56
It's actually fairly easy to factor $n$ given $e$ and $d$. Here's the standard way to do this:
• Compute $f = ed - 1$. What's interesting about $f$ is that $x^f \equiv 1\ (\bmod n)$ for (almost) any $x$.
• Write $f$ as $2^s g$ for an odd value $g$.
• Select a random value $a$, and compute $b = a^g \bmod n$.
• If $b = 1$ or $-1$, then go back and select another random value of $a$
• Repeatedly (in practice, up to $s$ times):
• compute $c = b^2 \bmod n$.
• If $c = 1$ then the factors for $n$ are $gcd(n, b-1)$ and $gcd(n, b+1)$
• If $c = -1$, then go back and select another random value of $a$
• Otherwise, set $b = c$, and go through another iteration of the loop.
If you are familiar with the Miller-Rabin primality test, this will look familiar; the logic is the same (except that we use $ed-1$ rather than $n-1$ as the starting place for the exponent)
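A compact Python version of this procedure (the toy key below is the usual textbook example with p=61, q=53; it is only meant to illustrate the loop structure):
import math
import random
def recover_factors(n, e, d):
    # Factor n given a valid RSA key pair (n, e, d)
    f = e * d - 1            # x**f = 1 (mod n) for almost all x
    s, g = 0, f
    while g % 2 == 0:        # write f = 2**s * g with g odd
        g //= 2
        s += 1
    while True:
        a = random.randrange(2, n - 1)
        b = pow(a, g, n)
        if b in (1, n - 1):
            continue          # useless starting point, pick another a
        for _ in range(s):
            c = pow(b, 2, n)
            if c == 1:        # b is a nontrivial square root of 1 mod n
                p = math.gcd(n, b - 1)
                return p, n // p
            if c == n - 1:
                break         # dead end, restart with a new a
            b = c
print(recover_factors(3233, 17, 2753))   # (53, 61) or (61, 53)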
-
Just to clarify, so when we write $f$ as $2^s g$, do you mean $f=2^s g$? Also, is it $2^s g$ or $2^{s g}$? – hhel uilop Nov 4 '13 at 17:09
@hheluilop: $f = 2^s \times g$; keep on dividing $f$ by two until you get an odd number. – poncho Nov 4 '13 at 17:16
There is no such value as $\gcd(n+1)$. Did you mean $\gcd(n,b+1)$? – cpast May 10 '15 at 19:33
@cpast: yes, of course – poncho May 11 '15 at 21:16
Generally, (n,e,d) is sufficient. Using these three it is possible to decrypt, encrypt, sign and verify any message or signature.
If you still need p and q: NIST SP 800-56B: Recommendation for Pair-Wise Key Establishment Schemes Using Integer Factorization Cryptography, Appendix C Prime Factor Recovery (Normative) contains formula for retrieving p and q, when you know (n,e,d). This formula is useful for instance to convert the private key in (n,e,d) format to CRT format.
Even a tool exists for the job: RSA CRT/SFM Converter.
-
I will definitely check out the tool later. Do you know if the tool can handle 115-135 digit long integers for n and maybe 30-50 for d? – hhel uilop Nov 4 '13 at 17:24
If my memory serves me right, it'll handle any usual lengths. – user4982 Nov 4 '13 at 17:45
The algorithm in the NIST document is clear to me except for step 2. How does one calculate $r$? Is brute force the only option? – Duncan Feb 3 '15 at 8:36
This step is fast to calculate because the other value ($2^t$) is multiple of 2. In fact, t is the number of zero bits on the least significant bits of k. An easy way to calculate r in many big number packages is to shift k right t steps. Either calculate the zeroes or shift one bit a time as long as the value is even. – user4982 Feb 3 '15 at 17:47
| 908
| 2,949
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.96875
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.840993
|
Computing p and q from private key
We are given n (public modulus) where n=pq and e (encryption exponent). Then I was able to crack the private key d, using Wiener's attack. So now, I have (n,e,d). My question is, is there a way to calculate p and q from this information? If so, any links and explanation would be much appreciated! -
– Ricky Demer Nov 4 '13 at 16:56
It's actually fairly easy to factor $n$ given $e$ and $d$. Here's the standard way to do this:
• Compute $f = ed - 1$. What's interesting about $f$ is that $x^f \equiv 1\ (\bmod n)$ for (almost) any $x$. • Write $f$ as $2^s g$ for an odd value $g$. • Select a random value $a$, and compute $b = a^g \bmod n$. • If $b = 1$ or $-1$, then go back and select another random value of $a$
• Repeatedly (in practice, up to $s$ times):
• compute $c = b^2 \bmod n$. • If $c = 1$ then the factors for $n$ are $gcd(n, b-1)$ and $gcd(n, b+1)$
• If $c = -1$, then go back and select another random value of $a$
• Otherwise, set $b = c$, and go through another iteration of the loop. If you are familiar with the Miller-Rabin primality test, this will look familiar; the logic is the same (except that we use $ed-1$ rather than $n-1$ as the starting place for the exponent)
-
Just to clarify, so when we write $f$ as $2^s g$, do you mean $f=2^s g$? Also, is it $2^s g$ or $2^{s g}$?
|
– hhel uilop Nov 4 '13 at 17:09
@hheluilop: $f = 2^s \times g$; keep on dividing $f$ by two until you get an odd number.
|
https://math.stackexchange.com/questions/358750/prove-the-following-is-a-tautology
| 1,701,744,114,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-50/segments/1700679100540.62/warc/CC-MAIN-20231205010358-20231205040358-00464.warc.gz
| 435,927,030
| 37,913
|
# Prove the following is a tautology
I was trying to prove this statement is a tautology without using truth tables. Something doesn't add up here as I keep getting stuck. Take a look please!
For statements P, Q and R, prove that the statement $$[(P \implies Q) \implies R] \lor [\neg P \lor Q]$$ is a tautology.
It is possible to do this without truth tables right? Here is what I have so far! :)
The statement $(P \implies Q)$ can be collapsed into $(\neg P \lor Q)$. So we can replace the phrase $[(P \implies Q) \implies R]$ with $(\neg P \lor Q) \implies R$. Again, we can collapse that expression and get $(P \land \neg Q) \lor R$. From here I am not sure where to go. There isn't even an $R$ in the expression $[\neg P \lor Q]$! Help would be greatly appreciated! Thank you :)
• You can use \lor and \land for $\lor$ and $\land$. Apr 11, 2013 at 20:52
• Thank you everyone! Very helpful indeed :) Apr 11, 2013 at 21:14
• You’re welcome. Apr 11, 2013 at 21:21
## 3 Answers
You’ve done much of it. You have
$$\Big((P\land\neg Q)\lor R\Big)\lor(\neg P\lor Q)\;,$$
but you’d be better off backing up a step to
$$\Big(\neg(\neg P\lor Q)\lor R\Big)\lor(\neg P\lor Q)\;.$$
Now rewrite this as
$$\Big(\neg(\neg P\lor Q)\lor(\neg P\lor Q)\Big)\lor R$$
and notice that the big parenthesis is of the form $\neg S\lor S$.
You can instead work directly from what you already have, if you want:
\begin{align*} (P\land\neg Q)\lor(\neg P\lor Q)&\equiv\Big(P\lor(\neg P\lor Q)\Big)\land\Big(\neg Q\lor(\neg P\lor Q)\Big)\\ &\equiv\Big((P\lor\neg P)\lor Q\Big)\land\Big(\neg P\lor(\neg Q\lor Q)\Big)\\ &\equiv(\top\lor Q)\land(\neg P\lor\top)\\ &\equiv\top\land\top\\ &\equiv\top\;. \end{align*}
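The question asks for a proof without truth tables, but as a quick machine check that the formula really is a tautology, an exhaustive evaluation in Python:
from itertools import product
def formula(p, q, r):
    # ((P -> Q) -> R) or (not P or Q), writing "A -> B" as "not A or B"
    return ((not (not p or q)) or r) or (not p or q)
print(all(formula(p, q, r) for p, q, r in product([False, True], repeat=3)))  # True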
Applying the same equivalence you used in the first transformation:
From $$((\lnot P \lor Q)\rightarrow R) \lor (\lnot P \lor Q)$$
You get $$(\color{blue}{\bf\lnot (\lnot P \lor Q)} \lor R ) \lor \color{blue}{\bf (\lnot P \lor Q)}$$
Which is necessarily true because the law of the excluded middle:
$$\lnot(\lnot P \lor Q) \lor (\lnot P \lor Q)$$ is necessarily true. And $T \lor R$ is necessarily true.
• Is this clear now? You have three statements "or'd", and two of them are necessarily true, together, giving us $T \lor R$: If one of two terms in a disjunction is true, then the disjunction as a whole is necessarily true. $T \lor R =$ True. Hence the statement is indeed a tautology, necessarily true. Apr 11, 2013 at 21:17
• Oh yes! Makes perfect sense! Thank you :) Apr 11, 2013 at 22:03
• When OPs answer like that - it is nice! +1 Apr 12, 2013 at 0:32
• Yes, indeed. nicefella's always responsive and shows work, and such. nicefella is a model "asker" for the site. Apr 12, 2013 at 0:35
You're almost there.
(¬P∨Q)⟹R = ¬(¬P∨Q)∨R
¬(¬P∨Q)∨R∨(¬P∨Q)
Let (¬P∨Q) = A. Then you have ¬A ∨ R ∨ A. So you have NOT A or A, which is always true.
| 958
| 2,838
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.984375
| 4
|
CC-MAIN-2023-50
|
latest
|
en
| 0.708662
|
# Prove the following is a tautology
I was trying to prove this statement is a tautology without using truth tables. Something doesn't add up here as I keep getting stuck. Take a look please! For statements $P$, $Q$ and $R$, prove that the statement $$[(P \implies Q) \implies R] \lor [\neg P \lor Q]$$
It is possible to do this without truth tables right? Here is what I have so far! :)
The statement $(P \implies Q)$ can be collapsed into $(\neg P \lor Q)$. So we can replace the phrase $[(P \implies Q) \implies R]$ with $(\neg P \lor Q) \implies R$. Again, we can collapse that expression and get $(P \land \neg Q) \lor R$. From here I am not sure where to go. There isn't even an $R$ in the expression $[\neg P \lor Q]$! Help would be greatly appreciated! Thank you :)
• You can use \lor and \land for $\lor$ and $\land$. Apr 11, 2013 at 20:52
• Thank you everyone! Very helpful indeed :) Apr 11, 2013 at 21:14
• You’re welcome. Apr 11, 2013 at 21:21
## 3 Answers
You’ve done much of it. You have
$$\Big((P\land\neg Q)\lor R\Big)\lor(\neg P\lor Q)\;,$$
but you’d be better off backing up a step to
$$\Big(\neg(\neg P\lor Q)\lor R\Big)\lor(\neg P\lor Q)\;.$$
Now rewrite this as
$$\Big(\neg(\neg P\lor Q)\lor(\neg P\lor Q)\Big)\lor R$$
and notice that the big parenthesis is of the form $\neg S\lor S$. You can instead work directly from what you already have, if you want:
\begin{align*} (P\land\neg Q)\lor(\neg P\lor Q)&\equiv\Big(P\lor(\neg P\lor Q)\Big)\land\Big(\neg Q\lor(\neg P\lor Q)\Big)\\ &\equiv\Big((P\lor\neg P)\lor Q\Big)\land\Big(\neg P\lor(\neg Q\lor Q)\Big)\\ &\equiv(\top\lor Q)\land(\neg P\lor\top)\\ &\equiv\top\land\top\\ &\equiv\top\;. \end{align*}
Applying the same equivalence you used in the first transformation:
From $$((\lnot P \lor Q)\rightarrow R) \lor (\lnot P \lor Q)$$
You get $$(\color{blue}{\bf\lnot (\lnot P \lor Q)} \lor R ) \lor \color{blue}{\bf (\lnot P \lor Q)}$$
Which is necessarily true because the law of the excluded middle:
$$\lnot(\lnot P \lor Q) \lor (\lnot P \lor Q)$$ is necessarily true. And $T \lor R$ is necessarily true. • Is this clear now? You have three statements "or'd", and two of them are necessarily true, together, giving us $T \lor R$: If one of two terms in a disjunction is true, then the disjunction as a whole is necessarily true.
|
$T \lor R =$ True.
|
https://math.stackexchange.com/questions/3233705/random-variable-y-following-uniform-distribution-with-parameter-random-x-that-fo
| 1,571,633,385,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00397.warc.gz
| 582,692,968
| 31,422
|
# Random Variable Y following uniform distribution with parameter Random X that follows geometric.
Random variable X follows a geometric distribution with p=1/4. Random variable Y follows a uniform distribution on [-X,X]. I'm looking for P(Y>3/2) and also P(X=2|Y>3/2). I know for a fact that $$\sum_{k=1}^{\infty} z^k/k = -\log(1-z)$$ for $$|z|\lt 1$$.
The probability that $$Y \gt 3/2$$ is actually the probability that $$3/2 \lt Y \lt X$$.
If $$X \lt 2$$ then it is impossible for $$Y$$ to be greater than $$3/2$$.
Therefore $$X$$ must be at least $$2$$ for the probability to even make sense. So we have $$\mathbb{P} \left(Y \gt \frac32 \right) = \sum_{x=2}^\infty \left( \frac14 \right)\left( \frac34 \right) ^{x-1} \left( \frac{x - 3/2}{2x}\right)$$
• No, it is possible for $Y>1.5$ when $X=2$ – Graham Kemp May 20 at 23:19
• @Graham Kemp True, I simply misread it as $Y > 3$ when I solved the problem. Will fix – WaveX May 20 at 23:20
• This sum converges to about $.2159$ – WaveX May 22 at 17:39
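(An added numerical sanity check, using only standard Python: truncating the series reproduces the value quoted in the comment above.)

```python
# Partial sum of P(Y > 3/2) = sum_{x >= 2} (1/4)(3/4)^(x-1) * (x - 3/2) / (2x)
total = sum((1/4) * (3/4)**(x - 1) * (x - 1.5) / (2 * x) for x in range(2, 10000))
print(round(total, 4))  # about 0.2159, matching the comment above
```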
| 337
| 990
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.546875
| 4
|
CC-MAIN-2019-43
|
latest
|
en
| 0.784873
|
# Random Variable Y following uniform distribution with parameter Random X that follows geometric. Random variable X follows a geometric distribution with p=1/4. Random variable Y follows a uniform distribution on [-X,X]. I'm looking for P(Y>3/2) and also P(X=2|Y>3/2). I know for a fact that $$\sum_{k=1}^{\infty} z^k/k = -\log(1-z)$$ for $$|z|\lt 1$$. The probability that $$Y \gt 3/2$$ is actually the probability that $$3/2 \lt Y \lt X$$. If $$X \lt 2$$ then it is impossible for $$Y$$ to be greater than $$3/2$$. Therefore $$X$$ must be at least $$2$$ for the probability to even make sense.
|
So we have $$\mathbb{P} \left(Y \gt \frac32 \right) = \sum_{x=2}^\infty \left( \frac14 \right)\left( \frac34 \right) ^{x-1} \left( \frac{x - 3/2}{2x}\right)$$
• No, it is possible for $Y>1.5$ when $X=2$ – Graham Kemp May 20 at 23:19
• @Graham Kemp True, I simply misread it as $Y > 3$ when I solved the problem.
|
https://math.stackexchange.com/questions/2922063/how-do-i-convert-differential-form-into-the-canonical-differential-form-in-order
| 1,585,939,440,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00007.warc.gz
| 549,494,704
| 31,833
|
# How do I convert differential form into the canonical differential form in order to find derivative?
Let $$a = x^tWx$$ where $x \in \mathbb{R}^{m\times1}$ and $W \in \mathbb{R}^{m\times m}$.
Then $$\mathrm{d}a = (\mathrm{d}x^tW)x + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (\mathrm{d}x^t)Wx + x^t(\mathrm{d}W)x + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (\mathrm{d}x)^tWx + x^t(\mathrm{d}W)x + x^tW\mathrm{d}x$$
From this I want to find the following derivatives: $$\frac{\mathrm{d}a }{\mathrm{d} W} = \space ? \space$$ $$\frac{\mathrm{d}a }{\mathrm{d} x} = \space ? \space$$
So I set the other differential to zero for each one and I get:
$$\mathrm{d}a = x^t(\mathrm{d}W)x$$ $$\mathrm{d}a = (\mathrm{d}x)^tWx + x^tW\mathrm{d}x$$
And this is where I get stuck because I don't know how to convert these into the canonical differential form described on Wikipedia on the following link under the section "Conversion from differential to derivative form" so that I could get the derivatives: https://en.wikipedia.org/wiki/Matrix_calculus
My failed attempt:
$$\mathrm{d}a = x^t(\mathrm{d}W)x$$ $$\mathrm{d}a = x^t(x^t(\mathrm{d}W)^t)^t$$
which gets $\mathrm{d}W$ to multiply last on the right, but it complicates things even further with all those transposes.
Similarly, I tried:
$$\mathrm{d}a = (\mathrm{d}x)^tWx + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (x^tW^t\mathrm{d}x)^t + x^tW\mathrm{d}x$$
How can I get from the differential form I have now to the canonical one (namely $da = A \space\mathrm{d}x$) so that I can use that to get the derivative?
For real vectors, the scalar product commutes. $$x^Ty = y^Tx$$ Applying this insight to your vector differential \eqalign{ da &= dx^TWx+x^TW\,dx \cr&= x^T(W+W^T)\,dx \cr &= \big((W+W^T)x\big)^Tdx = g^Tdx } So the gradient with respect to $x$ is simply $$g = \frac{\partial a}{\partial x} = (W+W^T)x$$ For real matrices, the scalar/Frobenius product also commutes $$X:Y = Y:X = {\rm tr}(Y^TX)$$ Applying this to your matrix differential \eqalign{ da &= x^T\,dW\,x \cr &= xx^T:dW = G:dW \cr } The gradient with respect to $W$ is $$G = \frac{\partial a}{\partial W} = xx^T$$ Note that my vector gradient $(g)$ has the same shape as the vector differential $(dx)$,
while my matrix gradient $(G)$ has the same shape as the matrix differential $(dW)$.
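(An added NumPy sketch, not part of the original answer: the test matrix and vector are arbitrary random data, and the check compares the two closed-form gradients against a first-order perturbation of $a$.)

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
W = rng.normal(size=(m, m))   # arbitrary test matrix
x = rng.normal(size=(m, 1))   # arbitrary test vector

def a(x, W):
    # scalar a = x^T W x
    return (x.T @ W @ x).item()

g = (W + W.T) @ x             # claimed gradient with respect to x
G = x @ x.T                   # claimed gradient with respect to W

# First-order check: a(x+dx, W+dW) - a(x, W) ~ g^T dx + <G, dW>
dx = 1e-6 * rng.normal(size=(m, 1))
dW = 1e-6 * rng.normal(size=(m, m))
lhs = a(x + dx, W + dW) - a(x, W)
rhs = (g.T @ dx).item() + np.sum(G * dW)
print(abs(lhs - rhs))         # tiny compared with |lhs|, supporting both formulas
```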
| 798
| 2,289
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2020-16
|
latest
|
en
| 0.760176
|
# How do I convert differential form into the canonical differential form in order to find derivative? Let $$a = x^tWx$$ where $x \in \mathbb{R}^{m\times1}$ and $W \in \mathbb{R}^{m\times m}$. Then $$\mathrm{d}a = (\mathrm{d}x^tW)x + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (\mathrm{d}x^t)Wx + x^t(\mathrm{d}W)x + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (\mathrm{d}x)^tWx + x^t(\mathrm{d}W)x + x^tW\mathrm{d}x$$
From this I want to find the following derivatives: $$\frac{\mathrm{d}a }{\mathrm{d} W} = \space ? \space$$ $$\frac{\mathrm{d}a }{\mathrm{d} x} = \space ? \space$$
So I set the other differential to zero for each one and I get:
$$\mathrm{d}a = x^t(\mathrm{d}W)x$$ $$\mathrm{d}a = (\mathrm{d}x)^tWx + x^tW\mathrm{d}x$$
And this is where I get stuck because I don't know how to convert these into the canonical differential form described on Wikipedia on the following link under the section "Conversion from differential to derivative form" so that I could get the derivatives: https://en.wikipedia.org/wiki/Matrix_calculus
My failed attempt:
$$\mathrm{d}a = x^t(\mathrm{d}W)x$$ $$\mathrm{d}a = x^t(x^t(\mathrm{d}W)^t)^t$$
which gets $\mathrm{d}W$ to multiply last on the right, but it complicates things even further with all those transposes. Similarly, I tried:
$$\mathrm{d}a = (\mathrm{d}x)^tWx + x^tW\mathrm{d}x$$ $$\mathrm{d}a = (x^tW^t\mathrm{d}x)^t + x^tW\mathrm{d}x$$
How can I get from the differential form I have now to the canonical one (namely $da = A \space\mathrm{d}x$) so that I can use that to get the derivative? For real vectors, the scalar product commutes.
|
$$x^Ty = y^Tx$$ Applying this insight to your vector differential \eqalign{ da &= dx^TWx+x^TW\,dx \cr&= x^T(W+W^T)\,dx \cr &= \big((W+W^T)x\big)^Tdx = g^Tdx } So the gradient with respect to $x$ is simply $$g = \frac{\partial a}{\partial x} = (W+W^T)x$$ For real matrices, the scalar/Frobenius product also commutes $$X:Y = Y:X = {\rm tr}(Y^TX)$$ Applying this to your matrix differential \eqalign{ da &= x^T\,dW\,x \cr &= xx^T:dW = G:dW \cr } The gradient with respect to $W$ is $$G = \frac{\partial a}{\partial W} = xx^T$$ Note that my vector gradient $(g)$ has the same shape as the vector differential $(dx)$,
while my matrix gradient $(G)$ has the same shape as the matrix differential $(dW)$.
|
https://math.stackexchange.com/questions/1732130/discriminant-of-the-cube-quartic
| 1,660,285,060,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00374.warc.gz
| 350,471,708
| 64,724
|
# Discriminant of the cubic, quartic...
I was told the discriminant of the cubic is
$$\Delta=-27q^2-4p^3$$
and that $\Delta>0$ means that there are three real roots. Simply put, why is this the discriminant? I ask this because, looking at Cardano's formula, I thought that we want everything inside the square root to be positive to get real roots(just as in quadratic cases).
Namely, $\frac{q^2}{4}+\frac{p^3}{27}>0$. Which is essentially, $27q^2+4p^3>0$. But the discriminant has a minus on it, and I don't see why. Does the derivation involve resultants and whatnot? Will it be rather complex?
I am wondering if there is a simple explanation as to why this is the case. Similarly, for the quartic, quintic discriminants...will I need to go through resultants for them? Or is there a simpler faster way to determine them?
Your discriminant refers to a cubic equation reduced to the form $$y^3+py+q=0$$ and the Cardano formula says that the solutions are: $$y=\sqrt[3]{-\frac{q}{2}+\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}+\sqrt[3]{-\frac{q}{2}-\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}$$
Note that if $$\Delta=\frac{q^2}{4}+\frac{p^3}{27}>0$$ the arguments of the cubic roots are real numbers, so the roots give only one real value ( and two complex values), so the other solutions of the equation must be complex numbers and we cannot have three real roots.
The proof that for $\Delta \le 0$ we find three real roots is not so simple, you can see here.
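(An added numerical illustration of the sign convention, assuming NumPy: for a few sample depressed cubics $y^3+py+q$, compare the sign of $\frac{q^2}{4}+\frac{p^3}{27}$ with the number of real roots reported by a root solver.)

```python
import numpy as np

def real_root_count(p, q, tol=1e-9):
    roots = np.roots([1, 0, p, q])        # y^3 + 0*y^2 + p*y + q
    return int(np.sum(np.abs(roots.imag) < tol))

for p, q in [(-3, 1), (1, 1), (-3, 3)]:   # sample coefficients
    radicand = q**2 / 4 + p**3 / 27
    print(f"p={p}, q={q}: radicand sign {np.sign(radicand):+.0f}, real roots {real_root_count(p, q)}")
# negative radicand -> three real roots; positive radicand -> one real root
```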
| 419
| 1,451
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.859375
| 4
|
CC-MAIN-2022-33
|
latest
|
en
| 0.894639
|
# Discriminant of the cubic, quartic... I was told the discriminant of the cubic is
$$\Delta=-27q^2-4p^3$$
and that $\Delta>0$ means that there are three real roots. Simply put, why is this the discriminant? I ask this because, looking at Cardano's formula, I thought that we want everything inside the square root to be positive to get real roots(just as in quadratic cases). Namely, $\frac{q^2}{4}+\frac{p^3}{27}>0$. Which is essentially, $27q^2+4p^3>0$. But the discriminant has a minus on it, and I don't see why. Does the derivation involve resultants and whatnot? Will it be rather complex? I am wondering if there is a simple explanation as to why this is the case. Similarly, for the quartic, quintic discriminants...will I need to go through resultants for them? Or is there a simpler faster way to determine them?
|
Your discriminant refers to a cubic equation reduced to the form $$y^3+py+q=0$$ and the Cardano formula says that the solutions are: $$y=\sqrt[3]{-\frac{q}{2}+\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}+\sqrt[3]{-\frac{q}{2}-\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}$$
Note that if $$\Delta=\frac{q^2}{4}+\frac{p^3}{27}>0$$ the arguments of the cubic roots are real numbers, so the roots give only one real value ( and two complex values), so the other solutions of the equation must be complex numbers and we cannot have three real roots.
|
https://dsp.stackexchange.com/questions/66670/how-can-i-find-expansion-coefficients-of-the-y-vector-in-a-given-basis/66672
| 1,620,764,507,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00603.warc.gz
| 239,023,442
| 37,749
|
# How can I find expansion coefficients of the y vector in a given basis?
Consider the following vectors in $$\mathbb R^4$$:
$$\mathbf{v}^{(0)}=\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2} \end{bmatrix} , \mathbf{v}^{(1)}=\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\\-\frac{1}{2}\\-\frac{1}{2} \end{bmatrix}, \mathbf{v}^{(2)}=\begin{bmatrix}\frac{1}{2}\\-\frac{1}{2}\\\frac{1}{2}\\-\frac{1}{2} \end{bmatrix} \mathbf{v}^{(3)}=\begin{bmatrix}\frac{1}{2}\\-\frac{1}{2}\\\frac{-1}{2}\\\frac{1}{2} \end{bmatrix}$$
Let $$\mathbf{y}=\begin{bmatrix}0.5\\1.5\\-0.5\\0.5\\\end{bmatrix} \text,$$
what are the expansion coefficients of $$\mathbf{y}$$ in the basis $$\{\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$$?
The basis vectors are orthonormal (very nice! half the job done). Now project/take the inner product of the basis vectors with $$\mathbf{y}$$ to get the expansion coefficients.
In general for an orthonormal basis any $$\mathbf{y}$$ in $$\mathbb R^N$$ can be written as $$\mathbf{y} = \sum_{k=0}^{k=N-1}a_k\mathbf{v_k}$$ where $$a_k = \mathbf{y}^T\mathbf{v_k}$$, in this example
$$\mathbf{y} = \mathbf{v_0} + \mathbf{v_1} - \mathbf{v_2}$$, so the coefficients are [1,1,-1,0] for $$\mathbf{v_0}$$,$$\mathbf{v_1}$$, $$\mathbf{v_2}$$, $$\mathbf{v_3}$$ respectively.
You need to find projection of $$\vec{y}$$ along unit vectors in the direction of each of the basis vectors $$\mathbf v^{(i)}$$.
For finding unit vector in the direction of the vector, you just divide the vector by its magnitude.
And, for finding projection along a unit-vector, you just take the dot-product with the unit-vector.
So, the combined steps boil down to doing the following: $$<\vec{y}, \frac{\mathbf v^{i}}{||\mathbf v^{i}||^{2}_{2}}>$$
In your case the basis-vectors are already normalized, meaning they are already unit vectors, so just take the dot-product(inner product) of vector $$\vec{y}$$ with each of the basis vectors $$\mathbf v^i$$
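(For completeness, an added NumPy sketch that computes the four inner products and reproduces the coefficients [1, 1, -1, 0].)

```python
import numpy as np

# Rows are the orthonormal basis vectors v0..v3 from the question
V = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)
y = np.array([0.5, 1.5, -0.5, 0.5])

coeffs = V @ y                        # a_k = <y, v_k> since the basis is orthonormal
print(coeffs)                         # [ 1.  1. -1.  0.]
print(np.allclose(V.T @ coeffs, y))   # reconstruction y = sum_k a_k v_k: True
```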
| 720
| 1,959
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.53125
| 5
|
CC-MAIN-2021-21
|
latest
|
en
| 0.614542
|
# How can I find expansion coefficients of the y vector in a given basis? Consider the following vectors in $$\mathbb R^4$$:
$$\mathbf{v}^{(0)}=\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2}\\\frac{1}{2} \end{bmatrix} , \mathbf{v}^{(1)}=\begin{bmatrix}\frac{1}{2}\\\frac{1}{2}\\-\frac{1}{2}\\-\frac{1}{2} \end{bmatrix}, \mathbf{v}^{(2)}=\begin{bmatrix}\frac{1}{2}\\-\frac{1}{2}\\\frac{1}{2}\\-\frac{1}{2} \end{bmatrix} \mathbf{v}^{(3)}=\begin{bmatrix}\frac{1}{2}\\-\frac{1}{2}\\\frac{-1}{2}\\\frac{1}{2} \end{bmatrix}$$
Let $$\mathbf{y}=\begin{bmatrix}0.5\\1.5\\-0.5\\0.5\\\end{bmatrix} \text,$$
what are the expansion coefficients of $$\mathbf{y}$$ in the basis $$\{\mathbf{v}_0, \mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$$? The basis vectors are orthonormal (very nice!, half the job done), Now project/take inner product of the basis vectors with $$\mathbf{y}$$ to get the expansion coefficients.
|
In general for an orthonormal basis any $$\mathbf{y}$$ in $$\mathbb R^N$$ can be written as $$\mathbf{y} = \sum_{k=0}^{k=N-1}a_k\mathbf{v_k}$$ where $$a_k = \mathbf{y}^T\mathbf{v_k}$$, in this example
$$\mathbf{y} = \mathbf{v_0} + \mathbf{v_1} - \mathbf{v_2}$$, so the coefficients are [1,1,-1,0] for $$\mathbf{v_0}$$,$$\mathbf{v_1}$$, $$\mathbf{v_2}$$, $$\mathbf{v_3}$$ respectively.
|
http://math.stackexchange.com/questions/12522/period-of-a-function
| 1,469,279,987,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257822598.11/warc/CC-MAIN-20160723071022-00285-ip-10-185-27-174.ec2.internal.warc.gz
| 159,750,178
| 18,576
|
# Period of a function
What is the value of n $\in \mathbb{Z}$ for which the function $\displaystyle f(x) = \frac{\sin nx} { \sin \biggl( \frac{x}{n} \biggr) } \text { has } 4\pi$ as period?
Also, would it be possible to solve this if we need $x\pi$ as the period? I am interested in learning the general approach for this particular type of problem.
-
Edit your question again please – Bryan Yocks Nov 30 '10 at 19:44
@ Bryan Yocks : Done! – Quixotic Nov 30 '10 at 19:45
Thanks – Bryan Yocks Nov 30 '10 at 19:46
## 2 Answers
With $n=2$, $\sin(2x)$ has period $\pi$ and $\sin(x/2)$ has period $4\pi$ so their ratio must have a period of $4\pi$ since the latter period is an integral multiple of the former.
-
so their ratio must have a period of 4π Could you elaborate how? I am not sure how to find the period of the function of the form $f(x)\cdot f(z)$ or $f(z)/f(x)$. – Quixotic Nov 30 '10 at 19:54
We could also add $n= -2 \textrm{ and } n = \pm 1.$ – Derek Jennings Nov 30 '10 at 19:55
@Debanjan: The first numerator repeats its output values after every $\pi$ units. Hence, it also repeats after every $4\pi$ units. The denominator repeats its values every $4\pi$ units. So their ratio must repeat after every $4\pi$ units. In general if you have two functions, one with period $m$ and the other with period $n$ their product or ratio (essentially any function you can manufacture out of only the two) will have a period which is lcm$(m,n)$ – Timothy Wagner Nov 30 '10 at 19:58
@Derek: Yes. I thought the OP just needed one value. – Timothy Wagner Nov 30 '10 at 19:58
@Debanjan: First: When you write "$f(x)/g(z)$", you are writing a function of two variables. Is that what you mean? I sincerely doubt it. Second: don't think, check! It's simple enough to plug in and check. – Arturo Magidin Nov 30 '10 at 21:17
You want $$\frac{\sin n(x + 4\pi)}{\sin \frac{x + 4\pi}{n}} = \frac{\sin nx}{\sin \frac{x}{n}}.$$
This is equivalent with $$\sin \frac{x + 4\pi}{n} = \sin \frac{x}{n}.$$
Therefore $\frac{x}{n} = \frac{x + 4\pi}{n} + 2k\pi$ or $\frac{x}{n} = \pi - \frac{x + 4\pi}{n} + 2k\pi$ for some $k \in \mathbb{Z}$. In the first case $x = x + 4 \pi + 2k \pi n$ and thus $n = \frac{4}{2k}$. In the second case $x = \pi n - x - 4\pi + 2k\pi n$ and thus $n = \frac{2x + 4\pi}{2k\pi + \pi}$, which is impossible since this should hold for every $x \in \mathbb{R}$.
Thus $n = \pm 1$ or $\pm 2$.
-
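(An added numerical confirmation for the case $n=2$, assuming NumPy: $4\pi$ is a period of $\sin(2x)/\sin(x/2)$, while $2\pi$ is not.)

```python
import numpy as np

f = lambda x: np.sin(2 * x) / np.sin(x / 2)   # the n = 2 case
x = np.linspace(0.1, 6.0, 500)                # sample points avoiding zeros of sin(x/2)

print(np.allclose(f(x + 4 * np.pi), f(x)))    # True: 4*pi is a period
print(np.allclose(f(x + 2 * np.pi), f(x)))    # False: 2*pi is not a period
```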
| 827
| 2,408
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.0625
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.853591
|
# Period of a function
What is the value of n $\in \mathbb{Z}$ for which the function $\displaystyle f(x) = \frac{\sin nx} { \sin \biggl( \frac{x}{n} \biggr) } \text { has } 4\pi$ as period? Also could it be possible to solve this if we need $x\pi$ as period ?I am interested in learning the general approach for this particular type of the problem. -
Edit your question again please – Bryan Yocks Nov 30 '10 at 19:44
@ Bryan Yocks : Done! – Quixotic Nov 30 '10 at 19:45
Thanks – Bryan Yocks Nov 30 '10 at 19:46
## 2 Answers
With $n=2$, $\sin(2x)$ has period $\pi$ and $\sin(x/2)$ has period $4\pi$ so their ratio must have a period of $4\pi$ since the latter period is an integral multiple of the former. -
so their ratio must have a period of 4π Could you elaborate how? I am not sure how to find the period of the function of the form $f(x)\cdot f(z)$ or $f(z)/f(x)$. – Quixotic Nov 30 '10 at 19:54
We could also add $n= -2 \textrm{ and } n = \pm 1.$ – Derek Jennings Nov 30 '10 at 19:55
@Debanjan: The first numerator repeats its output values after every $\pi$ units. Hence, it also repeats after every $4\pi$ units. The denominator repeats its values every $4\pi$ units. So their ratio must repeat after every $4\pi$ units. In general if you have two functions, one with period $m$ and the other with period $n$ their product or ratio (essentially any function you can manufacture out of only the two) will have a period which is lcm$(m,n)$ – Timothy Wagner Nov 30 '10 at 19:58
@Derek: Yes. I thought the OP just needed one value. – Timothy Wagner Nov 30 '10 at 19:58
@Debanjan: First: When you write "$f(x)/g(z)$", you are writing a function of two variables. Is that what you mean? I sincerely doubt it. Second: don't think, check! It's simple enough to plug in and check. – Arturo Magidin Nov 30 '10 at 21:17
You want $$\frac{\sin n(x + 4\pi)}{\sin \frac{x + 4\pi}{n}} = \frac{\sin nx}{\sin \frac{x}{n}}.$$
This is equivalent with $$\sin \frac{x + 4\pi}{n} = \sin \frac{x}{n}.$$
Therefore $\frac{x}{n} = \frac{x + 4\pi}{n} + 2k\pi$ or $\frac{x}{n} = \pi - \frac{x + 4\pi}{n} + 2k\pi$ for some $k \in \mathbb{Z}$. In the first case $x = x + 4 \pi + 2k \pi n$ and thus $n = \frac{4}{2k}$. In the second case $x = \pi n - x - 4\pi + 2k\pi n$ and thus $n = \frac{2x + 4\pi}{2k\pi + \pi}$, which is impossible since this should hold for every $x \in \mathbb{R}$.
|
Thus $n = \pm 1$ or $\pm 2$.
|
https://math.stackexchange.com/questions/1463140/proof-for-why-a-matrix-multiplied-by-its-transpose-is-positive-semidefinite
| 1,708,480,581,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-10/segments/1707947473360.9/warc/CC-MAIN-20240221002544-20240221032544-00267.warc.gz
| 412,520,885
| 34,915
|
# Proof for why a matrix multiplied by its transpose is positive semidefinite
The top answer to this question says
Moreover if $A$ is regular, then $AA^T$ is also positive definite, since $$x^TAA^Tx=(A^Tx)^T(A^Tx)> 0$$
Suppose $A$ is not regular. It holds that $$x^TAA^Tx=(A^Tx)^T(A^Tx)= \|A^Tx\|^2_2 \ge 0$$ Therefore $AA^T$ is positive semidefinite. Is this argument enough, or am I missing something?
• Yes, that's enough. Oct 3, 2015 at 23:35
• Two comments: 1) Usually, the definition of a positive semidefinite matrix includes the requirement that $A$ is symmetric (or hermitian for complex matrices). You did not check that. 2) Your argument shows that $A^T A$ is positive semidefinite. It does not show that $A^T A$ is not positive definite. Oct 4, 2015 at 10:07
• What does it mean that "A is regular" in this context?
– Itay
Sep 17, 2016 at 9:32
• It means the same as invertible. So if $A$ is not invertible, then there are $x$ other than $0$ for which $Ax=0$ and thus strict inequality doesn't hold. On the other hand, if $A$ is invertible (thus regular), then $Ax=0$ only holds for $x=0$ and thus strict inequality (definiteness) holds for all $x \ne 0$. I think that more generally in this case regular means that the columns of $A$ are independent. So $A$ doesn't have to be square. Feb 5, 2017 at 20:50
• I think this piece of answer should be added to the top answer of the linked question! Jan 7, 2022 at 15:44
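(An added numerical illustration of the argument, assuming NumPy: even for a deliberately rank-deficient $A$, the matrix $AA^T$ is symmetric and its eigenvalues are non-negative.)

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 6))
A[3] = A[0] + A[1]                  # force a rank deficiency, so AA^T is only semidefinite
M = A @ A.T

print(np.allclose(M, M.T))                       # symmetric: True
print(np.linalg.eigvalsh(M).min() >= -1e-12)     # all eigenvalues >= 0 up to rounding: True
```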
| 445
| 1,435
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.6875
| 4
|
CC-MAIN-2024-10
|
latest
|
en
| 0.925447
|
# Proof for why a matrix multiplied by its transpose is positive semidefinite
The top answer to this question says
Moreover if $A$ is regular, then $AA^T$ is also positive definite, since $$x^TAA^Tx=(A^Tx)^T(A^Tx)> 0$$
Suppose $A$ is not regular. It holds that $$x^TAA^Tx=(A^Tx)^T(A^Tx)= \|A^Tx\|^2_2 \ge 0$$ Therefore $AA^T$ is positive semidefinite. Is this argument enough, or am I missing something? • Yes, that's enough. Oct 3, 2015 at 23:35
• Two comments: 1) Usually, the definition of a positive semidefinite matrix includes the requirement that $A$ is symmetric (or hermitian for complex matrices). You did not check that. 2) Your argument shows that $A^T A$ is positive semidefinite. It does not show that $A^T A$ is not positive definite. Oct 4, 2015 at 10:07
• What does it mean that "A is regular" in this context? – Itay
Sep 17, 2016 at 9:32
• It means the same as invertible. So if $A$ is not invertible, then there are $x$ other than $0$ for which $Ax=0$ and thus strict inequality doesn't hold.
|
On the other hand, if $A$ is invertible (thus regular), then $Ax=0$ only holds for $x=0$ and thus strict inequality (definiteness) holds for all $x \ne 0$.
|
https://math.stackexchange.com/questions/1555679/unexpectedly-uniformly-continuous-functions
| 1,653,341,839,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00340.warc.gz
| 419,603,461
| 65,452
|
# Unexpectedly uniformly continuous functions
The other day in an exam, I was given the following exercise:
Given $f : [0,1] \to \mathbb{R}$ continuous and such that $f(0) = 0, f(1) = 1$, let $g : \mathbb{R} \to \mathbb{R}$ be $g(x) = [x] + f(x - [x])$. Prove that $g$ is uniformly continuous.
I'm looking for more examples of this kind of exercise to practice with (i.e. functions with uniform continuity that are not as straightforward to prove that they are).
Any continuous function $f$ from a closed interval $[a, b]$ to $[a, b]$ is uniformly continuous. See https://en.wikipedia.org/wiki/Heine–Cantor_theorem.
If $f(a) = a$ and $f(b) = b$ then extending $f$ to a function $\Bbb{R} \to \Bbb{R}$ by taking $f(x) = x$ for $x \not\in [a, b]$ still gives a uniformly continuous function.
• The theorem can be applied: you prove $g$ is continuous on $[0, 1]$ and conclude that it is uniformly continuous on $[0, 1]$. Dec 2, 2015 at 1:06
• You are supposed to prove that it is uniformly continuous on $\mathbb{R}$, which is not compact. Dec 2, 2015 at 1:10
| 334
| 1,058
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.59375
| 4
|
CC-MAIN-2022-21
|
latest
|
en
| 0.932744
|
# Unexpectedly uniformly continuous functions
The other day in an exam, I was given the following exercise:
Given $f : [0,1] \to \mathbb{R}$ continuous and such that $f(0) = 0, f(1) = 1$, let $g : \mathbb{R} \to \mathbb{R}$ be $g(x) = [x] + f(x - [x])$. Prove that $g$ is uniformly continuous. I'm looking for more examples of this kind of exercise to practice with (i.e. functions with uniform continuity that are not as straightforward to prove that they are). Any continuous function $f$ from a closed interval $[a, b]$ to $[a, b]$ is uniformly continuous. See https://en.wikipedia.org/wiki/Heine–Cantor_theorem. If $f(a) = a$ and $f(b) = b$ then extending $f$ to a function $\Bbb{R} \to \Bbb{R}$ by taking $f(x) = x$ for $x \not\in [a, b]$ still gives a uniformly continuous function.
|
• The theorem can be applied: you prove $g$ is continuous on $[0, 1]$ and conclude that it is uniformly continuous on $[0, 1]$.
|
https://math.stackexchange.com/questions/3991180/prove-that-1-frac1nk-1-frackn-frack2n2/3991214
| 1,696,163,170,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00064.warc.gz
| 411,005,490
| 36,255
|
# Prove that $(1+\frac{1}{n})^k < 1+ \frac{k}{n}+\frac{k^2}{n^2}$ [duplicate]
Prove that $$(1+\frac{1}{n})^k < 1+ \frac{k}{n}+\frac{k^2}{n^2}, \forall$$ n, k nonnegative integers, $$k\le n$$.
I know that has something to do with Bernoulli's inequality $$(1+\alpha)^x\ge 1+\alpha x, \alpha \ge -1, n\ge1$$.
If I reconsider Bernoulli's inequality with $$\alpha=\frac{1}{n}$$ and $$x=k$$ it follows that
$$(1+\frac{1}{n})^k\ge 1+\frac{k}{n}$$, but I don't know how to continue.
I also tried to prove it with induction where I consider $$p(n):(1+\frac{1}{n})^k < 1+ \frac{k}{n}+\frac{k^2}{n^2}$$ to be true and prove that $$p(n+1):(1+\frac{1}{n+1})^k < 1+ \frac{k}{n+1}+\frac{k^2}{(n+1)^2}$$ to be also true, but it didn't work.
Thank you!
Fix $$n\in\Bbb N$$. The claim is trivial in the case $$k=1$$, so by induction assume the claim is true for some $$k\le n$$. Then we get: \begin{align*} \left(1+\frac{1}{n}\right)^{k+1}&<\left(1+\frac{k}{n}+\frac{k^2}{n^2}\right)\left(1+\frac{1}{n}\right)\\ &=1+\frac{k}{n}+\frac{k^2}{n^2}+\frac{1}{n}+\frac{k}{n^2}+\frac{k^2}{n^3}\\ &\overset{k\le n}{<}1+\frac{k+1}{n}+\frac{(k+1)^2}{n^2}, \end{align*} which proves the claim in the case $$k+1\leq n$$. (Actually we see from this proof that the statement is also correct for $$k=n+1$$.)
As a hint: exp(x) = 1 + x + x^2/2 + x^3/6 + ... The right hand side is 1 + x + x^2, cutting the series short but not dividing the quadratic term by 2. So for small k, not dividing the quadratic term by 2 makes the right side larger, but for large k the missing higher-order terms make the left side larger. Try with n = 1 and k = 1, 2, 3, etc.
I’d write down all the terms for fixed n and a given k on the left side, and show that the higher powers are less than the extra k^2/2n^2 on the right hand side, as long as k isn’t too large.
Proof without induction/calculus:
It's easy to prove the inequality is true for $$k=1,2$$. If $$k\ge 3$$ we have
$$\left(1+\frac{1}{n}\right)^k - 1- \frac{k}{n}-\frac{k^2}{n^2}$$ $$= - \frac{k^2}{n^2} + \binom{k}{2} \frac{1}{n^2} + \sum_{i=3}^k \binom{k}{i} \frac{1}{n^i} = -\frac{k(k+1)}{2n^2}+\sum_{i=3}^k \binom{k}{i} \frac{1}{n^i} \tag1$$
Notice that $$\frac{\binom{k}{i+1}/n^{i+1}}{\binom{k}{i}/n^i}=\frac{k-i}{n(i+1)} \le \frac{k-3}{4n}, \forall i \ge 3$$
Then $$(1)$$ is less than $$-\frac{k(k+1)}{2n^2} + \frac{\binom{k}{3}/n^3}{1-\frac{k-3}{4n}}$$ and it suffices to prove $$\frac{\frac{k(k-1)(k-2)}{6n^3}}{1-\frac{k-3}{4n}} < \frac{k(k+1)}{2n^2}$$ Or equivalently $$\frac{4(k-1)(k-2)}{4n-k+3} < 3(k+1) \iff 4(k-1)(k-2) < 3(k+1)(4n-k+3)\\ \stackrel{k\le n}{\Leftarrow} 4(k-1)(k-2) < 3(k+1)(4k-k+3) = 9(k+1)^2$$ which is trivially true.
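(An added brute-force check of the statement for small parameters, using exact rational arithmetic in plain Python.)

```python
from fractions import Fraction

ok = all(
    (1 + Fraction(1, n))**k < 1 + Fraction(k, n) + Fraction(k * k, n * n)
    for n in range(1, 40)
    for k in range(1, n + 1)
)
print(ok)  # True: the strict inequality holds for 1 <= k <= n <= 39
```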
| 1,093
| 2,615
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.1875
| 4
|
CC-MAIN-2023-40
|
latest
|
en
| 0.77551
|
# Prove that $(1+\frac{1}{n})^k < 1+ \frac{k}{n}+\frac{k^2}{n^2}$ [duplicate]
Prove that $$(1+\frac{1}{n})^k < 1+ \frac{k}{n}+\frac{k^2}{n^2}, \forall$$ n, k nonnegative integers, $$k\le n$$. I know that has something to do with Bernoulli's inequality $$(1+\alpha)^x\ge 1+\alpha x, \alpha \ge -1, n\ge1$$. If I reconsider Bernoulli's inequality with $$\alpha=\frac{1}{n}$$ and $$x=k$$ it follows that
$$(1+\frac{1}{n})^k\ge 1+\frac{k}{n}$$, but I don't know how to continue. I also tried to prove it with induction where I consider $$p(n):(1+\frac{1}{n})^k < 1+ \frac{k}{n}+\frac{k^2}{n^2}$$ to be true and prove that $$p(n+1):(1+\frac{1}{n+1})^k < 1+ \frac{k}{n+1}+\frac{k^2}{(n+1)^2}$$ to be also true, but it didn't work. Thank you! Fix $$n\in\Bbb N$$. The claim is trivial in the case $$k=1$$, so by induction assume the claim is true for some $$k. Then we get: \begin{align*} \left(1+\frac{1}{n}\right)^{k+1}&<\left(1+\frac{k}{n}+\frac{k^2}{n^2}\right)\left(1+\frac{1}{n}\right)\\ &=1+\frac{k}{n}+\frac{k^2}{n^2}+\frac{1}{n}+\frac{k}{n^2}+\frac{k^2}{n^3}\\ &\overset{k which proves the claim the case $$k+1\leq n$$. (Actually we see from this proof that the statement is also correct for $$k=n+1$$.) As a hint: exp(x) = 1 + x + x^2/2 + x^3/6 ... The right hand side is 1 + x + x^2, cutting the series short, but not dividing the quadratic term by 2. So for small k not dividing the quadratic term on the right side will make it larger for small k, but for large k adding the missing terms on the left side makes it larger. Try with n = 1 and k = 1, 2, 3, etc. I’d write down all the terms for fixed n and a given k on the left side, and show that the higher powers are less than the extra k^2/2n^2 on the right hand side, as long as k isn’t too large. Proof without induction/calculus:
It's easy to prove the inequality is true for $$k=1,2$$.
|
If $$k\ge 3$$ we have
$$\left(1+\frac{1}{n}\right)^k - 1- \frac{k}{n}-\frac{k^2}{n^2}$$ $$= - \frac{k^2}{n^2} + \binom{k}{2} \frac{1}{n^2} + \sum_{i=3}^k \binom{k}{i} \frac{1}{n^i} = -\frac{k(k+1)}{2n^2}+\sum_{i=3}^k \binom{k}{i} \frac{1}{n^i} \tag1$$
Notice that $$\frac{\binom{k}{i+1}/n^{i+1}}{\binom{k}{i}/n^i}=\frac{k-i}{n(i+1)} \le \frac{k-3}{4n}, \forall i \ge 3$$
Then $$(1)$$ is less then $$-\frac{k(k+1)}{2n^2} + \frac{\binom{k}{3}/n^3}{1-\frac{k-3}{4n}}$$ and it suffices to prove $$\frac{\frac{k(k-1)(k-2)}{6n^3}}{1-\frac{k-3}{4n}} < \frac{k(k+1)}{2n^2}$$ Or equivalently $$\frac{4(k-1)(k-2)}{4n-k+3} < 3(k+1) \iff 4(k-1)(k-2) < 3(k+1)(4n-k+3)\\ \stackrel{k\le n}{\Leftarrow} 4(k-1)(k-2) < 3(k+1)(4k-k+3) = 9(k+1)^2$$ which is trivially true.
|
https://physics.stackexchange.com/questions/577926/a-confusion-in-string-connected-to-movable-pulley
| 1,656,677,379,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00546.warc.gz
| 505,620,599
| 66,309
|
# A confusion in string connected to movable pulley
So as we can see, the pulley attached to the body here is a movable pulley. In illustration (A), if the block attached to the pulley is moved in the direction of the arrow with a displacement of X meters, then we can state that the pulley attached to the object via the string will also move with the block by a displacement of X meters. Now, if the pulley displaces by X meters, then to keep the string around the pulley tight, the string has to move by 2X meters (for this problem assume the string movement is from the bottom string to the top string). Now coming to the second case, i.e. (B): here again everything is the same as it was in (A), except the bottom string. In case (A) the bottom string was parallel to the horizontal (I have forgotten to draw it), whereas in case (B) it is making an angle theta with the horizontal.
So my question is: in case (B), if the block displaces X meters along the direction of the arrow, the pulley will also move with it, so will the displacement of the string (from bottom to top) needed to keep the string tight around the pulley again be 2X meters?
• (a) This is a geometrical question rather than a physical. (b) Assuming that the top left hand end of the string is anchored, then X m of string moves upwards over the pulley in both cases. (c) But we really do need to know where the left hand ends of the string go. How, for example is angle $\theta$ maintained? Sep 6, 2020 at 14:57
• Yes sir, the angle is maintained. Also you can assume that the bottom end of the string is attached to a small block and the top end is attached to a rigid wall, in both cases. Sep 6, 2020 at 15:07
• Are you assuming that the string is of fixed length, and that the 'small block' attached to the bottom end of the string is moveable? In that case you need to draw 'before and after' diagrams for the two cases. These should give you the answers you want. Sep 6, 2020 at 15:27
If the angle is maintained, then the object will move $$X(1 + \cos \theta)$$ in the direction of the string.
If the height is maintained, then the object will move $${X(1 + \sec \theta)}$$ horizontally, along the floor.
| 525
| 2,145
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.71875
| 4
|
CC-MAIN-2022-27
|
latest
|
en
| 0.918556
|
# A confusion in string connected to movable pulley
** So as we can see the pulley attached to the body here is a movable pulley.In Illustration (A) if the block attached to the pulley is moved in the direction of the arrow with an displacement of X meters then we can state that the pulley attached to the object via string will also move with the block by a displacement of X meters.Now,if the pulley displaces by X meters then to keep the string around the pulley tight,the string has to move by 2x meters(for this problem assume the string movement is from bottom string to top string).Now coming to the second case i.e (B),here again everything is same as it was in (A),except the bottom string.In case (A) the bottom string was parallel to horizontal(I have forgot to draw it) where as in case (B) it is making and angle theta with the horizontal. So my question is,in case (B)if the block will displace X meters along the direction of the arrow,the pulley will also move with it,so will this time the displacement of the strings(from bottom to top) to keep the string tight around the pulley be 2x meters. • (a) This is a geometrical question rather than a physical. (b) Assuming that the top left hand end of the string is anchored, then X m of string moves upwards over the pulley in both cases. (c) But we really do need to know where the left hand ends of the string go. How, for example is angle $\theta$ maintained? Sep 6, 2020 at 14:57
• Yes sir, the angle is maintained. Also you can assume that the bottom end of the string is attached to a small block and the top end is attached to a rigid wall, in both cases. Sep 6, 2020 at 15:07
• Are you assuming that the string is of fixed length, and that the 'small block' attached to the bottom end of the string is moveable? In that case you need to draw 'before and after' diagrams for the two cases. These should give you the answers you want. Sep 6, 2020 at 15:27
If the angle is maintained, then the object will move $$X(1 + \cos \theta)$$ in the direction of the string.
|
If the height is maintained, then the object will move $${X(1 + \sec \theta)}$$ horizontally, along the floor.
|
https://math.stackexchange.com/questions/1987335/prove-inequality-sum-limits-nijkx-iy-jz-k-le-n2/2068148
| 1,713,825,656,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296818374.84/warc/CC-MAIN-20240422211055-20240423001055-00891.warc.gz
| 339,081,833
| 35,641
|
# Prove inequality$\sum\limits_{n|i+j+k}x_{i}y_{j}z_{k}\le n^2$
Given an integer $n\ge 2$ and $x_{i},y_{i},z_{i}\in \mathbb{R}$ ($i=1,2,\cdots,n$) such that $$\sum_{i=1}^{n}(x^3_{i}+y^3_{i}+z^3_{i})=3n,$$ show that $$\sum_{i+j+k=n}x_{i}y_{j}z_{k}\le n^2.$$
I know $a^3+b^3+c^3\ge 3abc$ if $a+b+c\ge 0$.
• Do you mean $x_i y_j z_k$? Or $x_i, y_i, z_i \ge 0$? Otherwise $x_i$ can be arbitrarily large and the statement is obviously wrong. Oct 27, 2016 at 8:51
• Do you have the constraint $x_i, y_i, z_i \ge 0$? Oct 27, 2016 at 9:02
• sorry, Now I have edit, it's $x_{i}y_{j}z_{k}$,and this are real numbers Oct 27, 2016 at 9:03
• Does the vertical bar in $\sum_{n|i+j+k}x_{i}y_{j}z_{k}\le n^2$ mean "="? Nov 1, 2016 at 18:02
• @communnites Yor headline asks for the sum with condition $n|i+j+k$, and the main question body asks for the sum with condition $n=i+j+k$. Please make this consistent. Nov 7, 2016 at 8:49
If you assume $x_i\geq 0$ start with $3x_i y_j z_k \leq x_i^3+ y_j^3+ z_k^3$.
Sum this over the triples $i,j,k$. You get $$3 \textrm{LHS}\leq \sum_i x_i^3\sum_{j+k=n-i} 1 + \textrm{sums for }y_i,z_i.$$
The key idea is the combinatorics here: counting how many times the term $x_i^3$ arises. The conclusion is that $$3 \textrm{LHS}\leq \sum_i (n-i-1)(x_i^3+ y_i^3+ z_i^3)$$.
| 571
| 1,298
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.859375
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.732562
|
# Prove inequality$\sum\limits_{n|i+j+k}x_{i}y_{j}z_{k}\le n^2$
Being given an integer $n\ge 2$, and $x_{i},y_{i},z_{i}\in \mathbb{R}$ ($i=1,2,\cdots,n$) such that $$\sum_{i=1}^{n}(x^3_{i}+y^3_{i}+z^3_{i})=3n$$ show that $$\sum_{i+j+k=n}x_{i}y_{j}z_{k}\le n^2.$$
I know $a^3+b^3+c^3\ge 3abc$ if $a+b+c\ge 0$. • Do you mean $x_i y_j z_k$? Or $x_i, y_i, z_i \ge 0$? Otherwise $x_i$ can be arbitrarily large and the statement is obviously wrong. Oct 27, 2016 at 8:51
• Do you have the constraint $x_i, y_i, z_i \ge 0$? Oct 27, 2016 at 9:02
• sorry, Now I have edit, it's $x_{i}y_{j}z_{k}$,and this are real numbers Oct 27, 2016 at 9:03
• Does the vertical bar in $\sum_{n|i+j+k}x_{i}y_{j}z_{k}\le n^2$ mean "="? Nov 1, 2016 at 18:02
• @communnites Yor headline asks for the sum with condition $n|i+j+k$, and the main question body asks for the sum with condition $n=i+j+k$. Please make this consistent. Nov 7, 2016 at 8:49
If you assume $x_i\geq 0$ start with $3x_i y_j z_k \leq x_i^3+ y_j^3+ z_k^3$. Sum this over the triples $i,j,k$.
|
You get $$3 \textrm{LHS}\leq \sum_i x_i^3\sum_{j+k=n-i} 1 + \textrm{sums for }y_i,z_i.$$
The key idea is the combinatorics here: counting how many times the term $x_i^3$ arises.
|
http://math.stackexchange.com/questions/81694/additive-inverse-of-a-nilpotent-element-is-nilpotent
| 1,469,640,407,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257826916.34/warc/CC-MAIN-20160723071026-00034-ip-10-185-27-174.ec2.internal.warc.gz
| 165,517,769
| 17,496
|
# Additive inverse of a nilpotent element is nilpotent
An element $a$ of a ring $R$ is nilpotent if $a^n=0$ for some $n \ge 1$.
How do I show that the additive inverse of $a$, namely $-a$, is also nilpotent?
The ring is commutative but may not have a unit element.
-
Using the distributive property, $ab+(-a)b=(a+(-a))b=0\cdot b=0$. Therefore, $$(-a)b=-(ab)\tag{1a}$$ Also, $ab+a(-b)=a(b+(-b))=a\cdot0=0$. Therefore, $$a(-b)=-(ab)\tag{1b}$$ Furthermore, since $a+(-a)=0$, we have $$-(-a)=a\tag{2}$$ Using $(1)$ and $(2)$, it is easy to show by induction that $$(-a)^k=\left\{\begin{array}{}a^k&\text{if }k\text{ is even}\\-(a^k)&\text{if }k\text{ is odd}\end{array}\right.\tag{3}$$ The fact that $a$ is nilpotent and $(3)$ shows that $-a$ is nilpotent.
-
I am simply filling in the details of Kb100's suggestion and including $-(-a)=a$, which is needed to show $(3)$. – robjohn Nov 13 '11 at 18:01
Try first proving that $a(-b)=(-a)b=-(ab)$ and then (by induction) that $(-a)^n$ is $a^n$ if $n$ is even and $-(a^n)$ if $n$ is odd.
-
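(An added concrete illustration with NumPy; matrices form a non-commutative ring, but they make the statement easy to see: if $A^3=0$ then $(-A)^3=0$ as well.)

```python
import numpy as np

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])   # strictly upper triangular, so A^3 = 0

Z = np.zeros_like(A)
print(np.array_equal(np.linalg.matrix_power(A, 3), Z))    # True: A is nilpotent
print(np.array_equal(np.linalg.matrix_power(-A, 3), Z))   # True: -A is nilpotent too
```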
| 388
| 1,025
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.783596
|
# Additive inverse of a nilpotent element is nilpotent
An element $a$ of a ring $R$ is nilpotent if $a^n=0$ for some $n \ge 1$. How do I show that the additive inverse of $a$, namely $-a$, is also nilpotent? The ring is commutative but may not have a unit element. -
Using the distributive property, $ab+(-a)b=(a+(-a))b=0\cdot b=0$. Therefore, $$(-a)b=-(ab)\tag{1a}$$ Also, $ab+a(-b)=a(b+(-b))=a\cdot0=0$. Therefore, $$a(-b)=-(ab)\tag{1b}$$ Furthermore, since $a+(-a)=0$, we have $$-(-a)=a\tag{2}$$ Using $(1)$ and $(2)$, it is easy to show by induction that $$(-a)^k=\left\{\begin{array}{}a^k&\text{if }k\text{ is even}\\-(a^k)&\text{if }k\text{ is odd}\end{array}\right.\tag{3}$$ The fact that $a$ is nilpotent and $(3)$ shows that $-a$ is nilpotent. -
I am simply filling in the details of Kb100's suggestion and including $-(-a)=a$, which is needed to show $(3)$.
|
– robjohn Nov 13 '11 at 18:01
Try first proving that $a(-b)=(-a)b=-(ab)$ and then (by induction) that $(-a)^n$ is $a^n$ if $n$ is even and $-(a^n)$ if $n$ is odd.
|
http://math.stackexchange.com/questions/tagged/relations+homework
| 1,406,013,689,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2014-23/segments/1405997857710.17/warc/CC-MAIN-20140722025737-00039-ip-10-33-131-23.ec2.internal.warc.gz
| 243,622,899
| 23,133
|
# Tagged Questions
49 views
### Binary relation, reflexive, symmetric and transitive
I have a question regarding an image. I'm currently studying binary relations and the following image confused me: What got me confused is that the page from which I got the link ...
54 views
52 views
### Describing relations
(a). Describe all relations $R$ on $A$ which are simultaneously symmetric and antisymmetric. (b). Describe all relations $R$ on $A$ which are reflexive, symmetric, and antisymmetric. I have no ...
27 views
### Why are both of these not equivalence relations?
Can anyone tell me why the first set is an equivalence relation, and not the second? As far as I can see, both are reflexive, symmetric and transitive, but my book says only the second one is an ...
54 views
### Question on increasing/decreasing subsequences?
Here's the question: Describe a sequence consisting from 1 to 10,000 in some order so that there is no increasing or decreasing subsequence of size 101. I'm not quite sure how to do this. My first ...
85 views
### Is this poset a lattice?
Given the set $\Bbb Z^+\times\Bbb Z^+$ and the relation \begin{align*} (x_1,x_2)\,R\,(y_1,y_2)\iff &(x_1+x_2 < y_1 + y_2)\\ &\text{ OR }(x_1 + x_2 = y_1 + y_2\text{ AND }x_1 \le y_1)\;: ...
121 views
### Symmetric relations
Let $A=\{ 1, 2, 3, 4 \}$ is $B = \{ (1, 2), (2, 1), (1, 3), (3, 1) \}$ Is $B$ a symmetric relation on $A$? I said no because not all $x, y \in A$ are in $B$ Is this correct?
135 views
### Prove the relation to be a Linear Order.
Let (a, b),(x, y) ∈ R × R and define ≺ as follows: (a, b) ≺ (c, d) iff a < c or a = c and b < d: Define (a, b) ≼ (c, d) if and only if (a, b) = (c, d) or (a, b) ≺ (c, d). Show that ≼ is a ...
450 views
### How to solve recurrence relation: f(n) = f(n-1) + 2(n-1) when f(1) = 1?
I am just learning about recurrence relations, and this is an absolute beginner's question. I understand what's going on in the formula, but I have no clue how to write it's solution? This probably ...
78 views
75 views
### $\beta$ as the relation “is a brother of”
So I have a question about relations. In particular, here is the formal question: Let $\beta$ be the relation "is a brother of" and let $\sigma$ be the relation "is a sister of". Describe ...
66 views
### Abstract Algebra topic: Equivalence relations [duplicate]
If R1 is reflective and not transitive, R2 is transitive but not symmetric and R3 is symmetric but not reflexive. We need to find an example of a set S and the three relations R1 R2 R3.
109 views
### relations - examples and counterexamples
The question is to find an example of a set $S$ and three relations $R_1$, $R_2$, and $R_3$ on it, such that $R_1$ is reflexive but not transitive, $R_2$ is transitive but not symmetric and $R_3$ is ...
Let's say I have $A=\{1,\dots,8\}$. I want to know the following things: what is the number of relations on $A$? what is the number of reflexive relations on $A$? what is the number of equivalence relations ...
### Partially ordered set Question : $A=\{1,2,3,4,5,6\}$ ,$R =\mathcal P(A) \times \mathcal P(A)$
I`m trying to prove that this relation is partially ordered set: $A=\{1,2,3,4,5,6\}$ $R =\mathcal P(A) \times \mathcal P(A)$ \$(B,C)R(D,E) \Longleftrightarrow (B \subset D) \vee ((B=D)\wedge(C ...
| 1,010
| 3,291
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.875
| 4
|
CC-MAIN-2014-23
|
latest
|
en
| 0.916261
|
# Tagged Questions
49 views
### Binary relation, reflexive, symmetric and transitive
I have a question regarding an image. I'm currently studying binary relations and the following image confused me: What got me confused is that the page from which I got the link ...
54 views
52 views
### Describing relations
(a). Describe all relations $R$ on $A$ which are simultaneously symmetric and antisymmetric. (b). Describe all relations $R$ on $A$ which are reflexive, symmetric, and antisymmetric. I have no ...
27 views
### Why are both of these not equivalence relations? Can anyone tell me why the first set is an equivalence relation, and not the second? As far as I can see, both are reflexive, symmetric and transitive, but my books says only the second one is an ...
54 views
### Question on increasing/decreasing subsequences? Here's the question: Describe a sequence consisting from 1 to 10,000 in some order so that there is no increasing or decreasing subsequence of size 101. I'm not quite sure how to do this. My first ...
85 views
### Is this poset a lattice? Given the set $\Bbb Z^+\times\Bbb Z^+$ and the relation \begin{align*} (x_1,x_2)\,R\,(y_1,y_2)\iff &(x_1+x_2 < y_1 + y_2)\\ &\text{ OR }(x_1 + x_2 = y_1 + y_2\text{ AND }x_1 \le y_1)\;: ...
121 views
### Symmetric relations
Let $A=\{ 1, 2, 3, 4 \}$ is $B = \{ (1, 2), (2, 1), (1, 3), (3, 1) \}$ Is $B$ a symmetric relation on $A$? I said no because not all $x, y \in A$ are in $B$ Is this correct? 135 views
### Prove the relation to be a Linear Order. Let (a, b),(x, y) ∈ R × R and define ≺ as follows: (a, b) ≺ (c, d) iff a < c or a = c and b < d: Define (a, b) ≼ (c, d) if and only if (a, b) = (c, d) or (a, b) ≺ (c, d). Show that ≼ is a ...
450 views
### How to solve recurrence relation: f(n) = f(n-1) + 2(n-1) when f(1) = 1? I am just learning about recurrence relations, and this is an absolute beginner's question. I understand what's going on in the formula, but I have no clue how to write it's solution? This probably ...
78 views
75 views
### $\beta$ as the relation “is a brother of”
So I have a question about relations. In particular, here is the formal question: Let $\beta$ be the relation "is a brother of" and let $\sigma$ be the relation "is a sister of". Describe ...
66 views
### Abstract Algebra topic: Equivalence relations [duplicate]
If R1 is reflective and not transitive, R2 is transitive but not symmetric and R3 is symmetric but not reflexive. We need to find an example of a set S and the three relations R1 R2 R3. 109 views
### relations - examples and counterexamples
The question is to find an example of a set $S$ and three relations $R_1$, $R_2$, and $R_3$ on it, such that $R_1$ is reflexive but not transitive, $R_2$ is transitive but not symmetric and $R_3$ is ...
let say I have $A=\{1,\dots,8\}$ I want to know the following things: what the number of relations on $A$? what the number of reflexivity relations on $A$?
|
what the number of equivalence relations ...
### Partially ordered set Question : $A=\{1,2,3,4,5,6\}$ ,$R =\mathcal P(A) \times \mathcal P(A)$
I`m trying to prove that this relation is partially ordered set: $A=\{1,2,3,4,5,6\}$ $R =\mathcal P(A) \times \mathcal P(A)$ \$(B,C)R(D,E) \Longleftrightarrow (B \subset D) \vee ((B=D)\wedge(C ...
|
https://math.stackexchange.com/questions/2099745/the-moore-penrose-inverse-of-1-ba-when-1-ab-is-moore-penrose-invertible
| 1,563,844,836,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00523.warc.gz
| 471,012,726
| 35,702
|
# the Moore-Penrose inverse of $1-ba$ when $1-ab$ is Moore-Penrose invertible
If $a,b$ are elements of a unital algebra $A$, then is there a proposition that states $1-ab$ is Moore-Penrose invertible if and only if $1-ba$ is Moore-Penrose invertible? If yes, what is the Moore-Penrose inverse of $1-ba$? How can I prove it?
• You may want to take a look at this – polfosol Jan 16 '17 at 6:44
• My guess would be that $1+b(1-ab)^\dagger a=(1+ba)^\dagger$, based on power series expansions of the geometric series. However, I am only able to prove that $(1-ba)\left(1+b(1-ab)^\dagger a\right)(1-ba)=1-ba$. The rest just got messy and I don't arrive anywhere. – Josué Tonelli-Cueto Jan 16 '17 at 10:33
• I think you mean $1+b(1-ab)^\dagger a=(1-b a)^\dagger$. – shima homayouni Jan 16 '17 at 10:50
Suppose that $c$ is the Moore-Penrose pseudoinverse of $1-ab$. By definition, this means that $$(1-ab)c(1-ab)=1-ab\qquad (1),$$ together with three other identities of a similar flavor.
I claim that $1+bca$ is the Moore-Penrose pseudoinverse of $1-ba$. Indeed, each of the four identities satisfied by $c$ implies the analogous identity with $1+bca$ in place of $c$ and $1-ba$ in place of $1-ab$. For example, expanding (1) gives $$c-abc-cab+abcab=1-ab.$$ Multiplying on the left by $b$ and on the right by $a$ then gives that $$bca-babca-bcaba+babcaba=ba-baba\qquad (2).$$ Expand the following expression: $$(1-ba)(1+bca)(1-ba)=1-2ba+baba + bca-babca-bcaba+babcaba.$$ By (2), this equals $1-2ba+baba+ba-baba$, which simplifies to $1-ba$, as desired. This is the first of the four equalities needed to verify that $1+bca$ is the Moore-Penrose pseudoinverse of $1-ba$. Similar calculations establish the other three.
• How can I prove the second equality? Should I multiply on the left by $b$ and on the right by $a$? – shima homayouni Jan 26 '17 at 17:03
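(An added numerical spot-check of the first Penrose identity expanded above, assuming NumPy; the matrices are random test data constructed so that $1-ab$ is genuinely singular and its Moore-Penrose pseudoinverse is not just an ordinary inverse.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
b = rng.normal(size=(n, n))
x, y = rng.normal(size=n), rng.normal(size=n)
a = np.outer(x, y) / (y @ b @ x)   # built so a@b has eigenvalue 1, i.e. I - a@b is singular
I = np.eye(n)

c = np.linalg.pinv(I - a @ b)      # Moore-Penrose pseudoinverse of 1 - ab
candidate = I + b @ c @ a          # the proposed pseudoinverse of 1 - ba

# First Penrose identity for 1 - ba, exactly as expanded in the answer:
print(np.allclose((I - b @ a) @ candidate @ (I - b @ a), I - b @ a))   # True
```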
| 608
| 1,847
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.859375
| 4
|
CC-MAIN-2019-30
|
latest
|
en
| 0.838587
|
# the Moore-Penrose inverse of $1-ba$ when $1-ab$ is Moore-Penrose invertible
If $a,b$ are elements of a unital algebra $A$, then is there a proposition that states $1-ab$ is Moore-Penrose invertible if and only if $1-ba$ is Moore-Penrose invertible? If yes, what is the Moore-Penrose inverse of $1-ba$? How can I prove it? • You may want to take a look at this – polfosol Jan 16 '17 at 6:44
• My guess would be that $1+b(1-ab)^\dagger a=(1+ba)^\dagger$, based on power series expansions of the geometric series. However, I am only able to prove that $(1-ba)\left(1+b(1-ab)^\dagger a\right)(1-ba)=1-ba$. The rest just got messy and I don't arrive anywhere. – Josué Tonelli-Cueto Jan 16 '17 at 10:33
• I think you mean $1+b(1-ab)^\dagger a=(1-b a)^\dagger$. – shima homayouni Jan 16 '17 at 10:50
Suppose that $c$ is the Moore-Penrose pseudoinverse of $1-ab$. By definition, this means that $$(1-ab)c(1-ab)=1-ab\qquad (1),$$ together with three other identities of a similar flavor. I claim that $1+bca$ is the Moore-Penrose pseudoinverse of $1-ba$. Indeed, each of the four identities satisfied by $c$ implies the analogous identity with $1+bca$ in place of $c$ and $1-ba$ in place of $1-ab$. For example, expanding (1) gives $$c-abc-cab+abcab=1-ab.$$ Multiplying on the left by $b$ and on the right by $a$ then gives that $$bca-babca-bcaba+babcaba=ba-baba\qquad (2).$$ Expand the following expression: $$(1-ba)(1+bca)(1-ba)=1-2ba+baba + bca-babca-bcaba+babcaba.$$ By (2), this equals $1-2ba+baba+ba-baba$, which simplifies to $1-ba$, as desired.
|
This is the first of the four equalities needed to verify that $1+bca$ is the Moore-Penrose pseudoinverse of $1-ba$.
|
https://physics.stackexchange.com/questions/229827/air-drag-on-a-vertically-thrown-object
| 1,618,624,801,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00255.warc.gz
| 534,107,569
| 38,136
|
# Air drag on a vertically thrown object
Air drag on a vertically thrown object is given as directly proportional to the square of its instantaneous velocity. But shouldn't it be linear, since air drag depends on the amount of air displaced, which is directly proportional to velocity?
• The linear law applies for slowly moving objects, the square law for fast moving ones, but the details are rather complicated: en.wikipedia.org/wiki/Drag_(physics). There is, unfortunately, no simple explanation or formula for drag. – CuriousOne Jan 16 '16 at 9:07
Suppose the stone has some cross sectional area $A$. If it's travelling at a velocity $v$ then in one second it sweeps out a volume $Av$. Therefore the mass of the air it displaces is:
$$m_\text{air} = \rho Av$$
where $\rho$ is the density of the air.
For the next step we assume that the stone accelerates the air to match its own velocity so the change in the momentum of the air per second is:
$$\Delta p_\text{air} = m_\text{air}v = \rho Av^2$$
But the rate of change in the momentum is just the force, so we end up with:
$$F = \rho Av^2$$
This is an excessively simple calculation because in practice a moving object doesn't accelerate all the air it meets to match its own velocity. However it gives you a feel for where the $v^2$ term comes from.
You can get an intuitive sense for why there's a $v^2$ dependence by the following "toy model" analogy. Consider a large object, call it $M$, moving through absolutely "still air" comprised of a lot of "small marbles", each of the same mass $m$. The marbles are stationary (that's our "still air") and our object $M$ is moving through them with velocity $v$. So, each marble that $M$ collides with imparts a "retarding" momentum $mv$ to $M$ (actually, because $M\gg m$, each marble collision (assuming elastic collisions) really imparts $2mv$ to $M$, but that's a complication we can ignore for our purposes -- I only mention it to forestall comments).
So, if $M$ collides with $n$ such marbles in a second, then $M$'s momentum will be reduced by $n\times mv$. And that's linear in $v$. But now suppose you double $M$'s velocity. Not only will you double the effect of each collision ($2mv$ rather than $mv$), as above, but you'll double the number of marbles $M$ collides with each second ($2n$ rather than just $n$). So that's a factor of $2\times2=4$, and as I imagine you can see, it's $v^2$ (rather than just $v$) in general.
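To connect this with the linear-versus-quadratic question, here is a rough numerical sketch (not from the original post) comparing the crude quadratic estimate $\rho A v^2$ from the answer with Stokes' linear drag $6\pi\mu r v$ for a small sphere; the air density, viscosity and stone radius are assumed values, and the crossover illustrates why slow, tiny objects feel linear drag while a thrown stone feels quadratic drag.

import math

rho = 1.2        # air density, kg/m^3 (assumed)
mu = 1.8e-5      # air dynamic viscosity, Pa*s (assumed)
r = 0.02         # radius of a small stone, m (assumed)
A = math.pi * r**2

for v in (0.001, 0.01, 0.1, 1.0, 10.0):          # speeds in m/s
    quadratic = rho * A * v**2                   # crude estimate from the answer above
    linear = 6 * math.pi * mu * r * v            # Stokes drag, valid only at low Reynolds number
    print(f"v={v:6.3f} m/s  quadratic={quadratic:.2e} N  linear={linear:.2e} N")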
| 629
| 2,452
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.15625
| 4
|
CC-MAIN-2021-17
|
latest
|
en
| 0.897227
|
# Air drag on a vertically thrown object
Air drag on a vertically thrown object is given as directly proportional to the square of its instantaneous velocity. But shouldn't it be linear, since air drag depends on the amount of air displaced, which is directly proportional to velocity? • The linear law applies for slowly moving objects, the square law for fast moving ones, but the details are rather complicated: en.wikipedia.org/wiki/Drag_(physics). There is, unfortunately, no simple explanation or formula for drag. – CuriousOne Jan 16 '16 at 9:07
Suppose the stone has some cross sectional area $A$. If it's travelling at a velocity $v$ then in one second it sweeps out a volume $Av$. Therefore the mass of the air it displaces is:
$$m_\text{air} = \rho Av$$
where $\rho$ is the density of the air. For the next step we assume that the stone accelerates the air to match its own velocity so the change in the momentum of the air per second is:
$$\Delta p_\text{air} = m_\text{air}v = \rho Av^2$$
But the rate of change in the momentum is just the force, so we end up with:
$$F = \rho Av^2$$
This is an excessively simple calculation because in practice a moving object doesn't accelerate all the air it meets to match its own velocity. However it gives you a feel for where the $v^2$ term comes from. You can get an intuitive sense for why there's a $v^2$ dependence by the following "toy model" analogy. Consider a large object, call it $M$, moving through absolutely "still air" comprised of a lot of "small marbles", each of the same mass $m$. The marbles are stationary (that's our "still air") and our object $M$ is moving through them with velocity $v$. So, each marble that $M$ collides with imparts a "retarding" momentum $mv$ to $M$ (actually, because $M\gg m$, each marble collision (assuming elastic collisions) really imparts $2mv$ to $M$, but that's a complication we can ignore for our purposes -- I only mention it to forestall comments). So, if $M$ collides with $n$ such marbles in a second, then $M$'s momentum will be reduced by $n\times mv$. And that's linear in $v$. But now suppose you double $M$'s velocity. Not only will you double the effect of each collision ($2mv$ rather than $mv$), as above, but you'll double the number of marbles $M$ collides with each second ($2n$ rather than just $n$).
|
So that's a factor of $2\times2=4$, and as I imagine you can see, it's $v^2$ (rather than just $v$) in general.
|
https://engineering.stackexchange.com/questions/40847/to-stop-or-to-go-around
| 1,627,055,776,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00451.warc.gz
| 253,039,114
| 37,995
|
# To stop or to go around?
Let us say we are moving in a car. There is a wall in front of us, and we need to decide whether we can go around it or not. It is known that the width of the wall is $$w$$ and our speed is $$v = const$$.
What is a simple approach to formally set the conditions under which it is safe to go around?
I understand that there are many possible details such as traction, weight of car etc. and will be glad even for the most simplistic analysis. If you can provide a source, that too is great. Thanks.
• compute max turn radius, as function of w and distance. compute centripetal force as function of that radius and v. compare that force to max lateral force on tires before it slips – Pete W Mar 8 at 17:29
• If you go around it, you will still have a car. – StainlessSteelRat Mar 8 at 18:30
• the calculations for how quickly a car can turn are really too complex unless all you want is a general idea. This would be best accomplished by measuring the car's turning performance empirically. – Tiger Guy Mar 8 at 19:19
Each car, depending on its handling, has a maximum safe turning speed $$v_{max}$$ and radius $$r$$. Let us say the current speed is $$v < v_{max}$$; then your turning angular velocity is $$\omega= v/r.$$ We need to go through an arc of $$\pi/2$$, so the time it takes to turn is $$t=\frac{\pi/2}{\omega}= \frac{\pi r}{2v}$$, which gives the decision distance $$x$$ as $$x=t\cdot v$$.
The above was for a wall wider than the car's cornering turn; if it is narrower, then the arc is smaller and we have $$\theta= \arccos(r-\text{car-width})$$.
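A small sketch (not from the answer) putting the pieces together: Pete W's comment suggests taking the tightest radius from tyre friction, $$r = v^2/(\mu g)$$, and the answer's $$x = t\cdot v$$ then collapses to $$\pi r/2$$. The friction coefficient $$\mu$$ and $$g$$ here are assumed values.

import math

def decision_distance(v, mu=0.7, g=9.81):
    # Tightest safe turning radius at speed v, limited only by lateral friction mu*g
    r = v**2 / (mu * g)
    omega = v / r                 # angular velocity on that circle
    t = (math.pi / 2) / omega     # time to sweep a quarter-circle arc
    return t * v                  # decision distance x = t*v = pi*r/2

for v in (10, 20, 30):            # speeds in m/s
    print(v, round(decision_distance(v), 1), "m")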
| 397
| 1,516
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.859375
| 4
|
CC-MAIN-2021-31
|
latest
|
en
| 0.943186
|
# To stop or to go around? Let us say we are moving in a car. There is a wall in front of us, and we need to decide whether we can go around it or not. It is known that the width of the wall is $$w$$ and our speed is $$v = const$$. What is a simple approach to formally set the conditions under which it is safe to go around? I understand that there are many possible details such as traction, weight of car etc. and will be glad even for the most simplistic analysis. If you can provide a source, that too is great. Thanks. • compute max turn radius, as function of w and distance. compute centripetal force as function of that radius and v. compare that force to max lateral force on tires before it slips – Pete W Mar 8 at 17:29
• If you go around it, you will still have a car. – StainlessSteelRat Mar 8 at 18:30
• the calculations for how quickly a car can turn are really too complex unless all you want is a general idea. This would be best accomplished by measuring the car's turning performance empirically. – Tiger Guy Mar 8 at 19:19
Each car, depending on its handling, has a maximum safe turning speed $$v_{max}$$ and radius $$r$$. Let us say the current speed is $$v < v_{max}$$; then your turning angular velocity is $$\omega= v/r.$$ We need to go through an arc of $$\pi/2$$, so the time it takes to turn is $$t=\frac{\pi/2}{\omega}= \frac{\pi r}{2v}$$, which gives the decision distance $$x$$ as $$x=t\cdot v$$.
|
The above was for a wall wider than the car's cornering turn; if it is narrower, then the arc is smaller and we have $$\theta= \arccos(r-\text{car-width})$$.
|
https://math.stackexchange.com/questions/4525144/when-does-a-limit-function-not-exist
| 1,695,970,948,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-40/segments/1695233510498.88/warc/CC-MAIN-20230929054611-20230929084611-00626.warc.gz
| 414,492,217
| 34,325
|
# When does a limit function not exist??
$$f(x) = \begin{cases} \frac{3x-6}{x^4-16}, & 0 < x < 2 \\ 0, & x=2 \\ \frac{x-2}{\sqrt{3-x} -1}, & 2 < x < 3 \end{cases}$$
For this expression, can I say that in these 3 cases the limit does not exist, because when $$x=2$$ or as $$x$$ approaches $$2$$ from the left and right, all 3 functions have different values (when I substitute $$x=2$$), and thus the limit does not have a finite and unique value? Is that right to say? If not, when does a limit not exist?
1. $$\lim_{x \to 2^+}$$
2. $$\lim_{x \to 2^-}$$
3. $$\lim_{x \to 2}$$
The value of $$f(x)$$ when $$x=2$$ has nothing to do with the limit. The left hand limit ($$\frac 3{32}$$) and the right hand limit ($$-2$$) are different and that is enough to say that the limit does not exist.
1. $$\lim_{x \to 2^+} f(x) = -2$$ by L'Hôpital's rule
2. $$\lim_{x \to 2^-} f(x) = \frac{3}{32}$$ by L'Hôpital's rule
Since $$-2 \not = \frac{3}{32}$$
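A quick symbolic check (not in the original post), assuming the first branch's denominator is $$x^4-16$$, which is what makes the quoted $$\frac{3}{32}$$ come out:

import sympy as sp

x = sp.symbols('x')
left = sp.limit((3*x - 6) / (x**4 - 16), x, 2, dir='-')          # limit from the left
right = sp.limit((x - 2) / (sp.sqrt(3 - x) - 1), x, 2, dir='+')  # limit from the right
print(left, right)   # 3/32 and -2, so the two-sided limit does not exist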
| 339
| 918
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.03125
| 4
|
CC-MAIN-2023-40
|
latest
|
en
| 0.698802
|
# When does a limit function not exist?? $$f(x) = \begin{cases} \frac{3x-6}{x^4-16}, & 0 < x < 2 \\ 0, & x=2 \\ \frac{x-2}{\sqrt{3-x} -1}, & 2 < x < 3 \end{cases}$$
For this expression, can I say that in these 3 cases the limit does not exist, because when $$x=2$$ or as $$x$$ approaches $$2$$ from the left and right, all 3 functions have different values (when I substitute $$x=2$$), and thus the limit does not have a finite and unique value? Is that right to say? If not, when does a limit not exist? 1. $$\lim_{x \to 2^+}$$
2. $$\lim_{x \to 2^-}$$
3. $$\lim_{x \to 2}$$
The value of $$f(x)$$ when $$x=2$$ has nothing to do with the limit. The left hand limit ($$\frac 3{32}$$) and the right hand limit ($$-2$$) are different and that is enough to say that the limit does not exist.
|
1. $$\lim_{x \to 2^+} f(x) = -2$$ by L'Hôpital's rule
2. $$\lim_{x \to 2^-} f(x) = \frac{3}{32}$$ by L'Hôpital's rule
Since $$-2 \not = \frac{3}{32}$$
|
https://math.stackexchange.com/questions/2497642/whats-the-derivation-of-this-integral-formula
| 1,708,962,394,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-10/segments/1707947474660.32/warc/CC-MAIN-20240226130305-20240226160305-00338.warc.gz
| 386,088,743
| 35,815
|
# What's the derivation of this integral formula?
I was searching around the web for some information about integrals and I came across the formula:
$$\int_{-\infty}^\infty \frac{\ln(x^2)e^{\frac{-x^2}{2\sigma}}}{(2\pi)^\frac{1}{2}\sigma}dx= \ln(\sigma^2)-\gamma-\ln(2)$$
$\gamma =$ the Euler-Mascheroni Constant
I'm very unsure where the Euler-Mascheroni constant came from. I tried rearranging the integral to simpler terms but I end up getting:
$$\int_{-\infty}^\infty \ln|x|e^{-x^2}dx$$
which isn't overtly integrable. Where does this formula come from?
• Your rearrangement is problematical, since $\log x$ is not defined for $x\le0$. Oct 31, 2017 at 4:02
• If you used the fact that $\ln(x^2) = 2 \ln x$ while simplifying, then note that for $x$ negative, it must change to $2 \ln |x|$. Oct 31, 2017 at 4:04
• Are you very sure the exponent should not be $-x^2/(2\sigma^2)\text{ ?} \qquad$ Oct 31, 2017 at 4:22
• If we assume that this was supposed to say $$\int_{-\infty}^\infty \frac{\ln(x^2)e^{-x^2/(2\sigma^2)}}{(2\pi)^{1/2} \sigma}dx= \ln(\sigma^2)-\gamma-\ln(2),$$ then in probabilistic language, it says that if $X\sim N(0,\sigma^2),$ i.e. $X$ is normally distributed with expected value $0$ and standard deviation $\sigma,$ then $$\operatorname{E}(2\log|X|) = 2\log\sigma - \gamma-\log 2.$$ Oct 31, 2017 at 4:26
• \begin{align} \int_{-\infty}^\infty \frac{\ln(x^2)e^{-x^2/(2\sigma^2)}}{(2\pi)^{1/2} \sigma}dx & = 2\log\sigma - \int_{-\infty}^\infty 2\log|(x/\sigma)| \frac{e^{(-1/2)(x/\sigma)^2}}{\sqrt{2\pi}} \, \frac{dx} \sigma \\ \\ & = 2\log \sigma - \int_{-\infty}^\infty 2\log|u| \frac{e^{-(1/2)u^2}}{\sqrt{2\pi}} \,du. \end{align} Thus it is easily seen that the value of the integral is $2\log\sigma$ plus something not depending on $\sigma. \qquad$ Oct 31, 2017 at 4:36
Throw out unnecessary constants; since the integrand is even, the problem reduces to evaluating: $$I=\int_0^\infty \ln(x^2) e^{-ax^2} dx$$
Enforcing the substitution $x=(u/a)^{1/2}$ gives $$I = \frac{1}{2\sqrt{a}} \left[\int_0^\infty u^{-1/2} e^{-u}\ln u \, du - \ln a \int_0^\infty u^{-1/2}e^{-u}\ du \right]$$
The first integral is just $\Gamma'(1/2)=\sqrt{\pi}(-\gamma-2\ln 2)$, the second integral is $\Gamma(1/2)=\sqrt{\pi}$.
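For what it's worth, here is a numerical sanity check of the stated identity (not part of the original answer), taking the exponent to be $-x^2/(2\sigma^2)$ as the comments suggest; it uses mpmath and an arbitrary test value of $\sigma$.

import mpmath as mp

sigma = 1.5
integrand = lambda t: mp.log(t**2) * mp.exp(-t**2 / (2*sigma**2)) / (mp.sqrt(2*mp.pi) * sigma)
lhs = mp.quad(integrand, [-mp.inf, 0, mp.inf])      # split at 0 because of the log singularity
rhs = mp.log(sigma**2) - mp.euler - mp.log(2)       # ln(sigma^2) - gamma - ln 2
print(lhs, rhs)                                     # both come out around -0.459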
• I tried doing some research, but do you know how to prove $\frac{d}{dx}[\Gamma(1/2)] = (\pi)^\frac{1}{2}(-\gamma-2\ln(2))$ Or a link to somewhere with the proof? Oct 31, 2017 at 16:35
• The simplest way is possibly via the digamma function $\psi(z) = \Gamma'(z)/\Gamma(z)$. The digamma function satisfies $$\psi(1+z) = -\gamma + \int_0^1 \frac{1-x^z}{1-x} dx$$ from which you can calculate $\psi(1/2)$ easily. You can learn more about these functions by consulting relevant books, online resources sometimes only list formula but no proofs. Oct 31, 2017 at 17:46
| 1,030
| 2,793
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.0625
| 4
|
CC-MAIN-2024-10
|
latest
|
en
| 0.756308
|
# What's the derivation of this integral formula? I was searching around the web for some information about integrals and I came across the formula:
$$\int_{-\infty}^\infty \frac{\ln(x^2)e^{\frac{-x^2}{2\sigma}}}{(2\pi)^\frac{1}{2}\sigma}dx= \ln(\sigma^2)-\gamma-\ln(2)$$
$\gamma =$ the Euler-Mascheroni Constant
I'm very unsure where the Euler-Mascheroni constant came from. I tried rearranging the integral to simpler terms but I end up getting:
$$\int_{-\infty}^\infty \ln|x|e^{-x^2}dx$$
which isn't overtly integrable. Where does this formula come from? • Your rearrangement is problematical, since $\log x$ is not defined for $x\le0$. Oct 31, 2017 at 4:02
• If you used the fact that $\ln(x^2) = 2 \ln x$ while simplifying, then note that for $x$ negative, it must change to $2 \ln |x|$. Oct 31, 2017 at 4:04
• Are you very sure the exponent should not be $-x^2/(2\sigma^2)\text{ ?} \qquad$ Oct 31, 2017 at 4:22
• If we assume that this was supposed to say $$\int_{-\infty}^\infty \frac{\ln(x^2)e^{-x^2/(2\sigma^2)}}{(2\pi)^{1/2} \sigma}dx= \ln(\sigma^2)-\gamma-\ln(2),$$ then in probabilistic language, it says that if $X\sim N(0,\sigma^2),$ i.e. $X$ is normally distributed with expected value $0$ and standard deviation $\sigma,$ then $$\operatorname{E}(2\log|X|) = 2\log\sigma - \gamma-\log 2.$$ Oct 31, 2017 at 4:26
• \begin{align} \int_{-\infty}^\infty \frac{\ln(x^2)e^{-x^2/(2\sigma^2)}}{(2\pi)^{1/2} \sigma}dx & = 2\log\sigma - \int_{-\infty}^\infty 2\log|(x/\sigma)| \frac{e^{(-1/2)(x/\sigma)^2}}{\sqrt{2\pi}} \, \frac{dx} \sigma \\ \\ & = 2\log \sigma - \int_{-\infty}^\infty 2\log|u| \frac{e^{-(1/2)u^2}}{\sqrt{2\pi}} \,du. \end{align} Thus it is easily seen that the value of the integral is $2\log\sigma$ plus something not depending on $\sigma. \qquad$ Oct 31, 2017 at 4:36
Throw out unnecessary constants; since the integrand is even, the problem reduces to evaluating: $$I=\int_0^\infty \ln(x^2) e^{-ax^2} dx$$
Enforcing the substitution $x=(u/a)^{1/2}$ gives $$I = \frac{1}{2\sqrt{a}} \left[\int_0^\infty u^{-1/2} e^{-u}\ln u \, du - \ln a \int_0^\infty u^{-1/2}e^{-u}\ du \right]$$
The first integral is just $\Gamma'(1/2)=\sqrt{\pi}(-\gamma-2\ln 2)$, the second integral is $\Gamma(1/2)=\sqrt{\pi}$. • I tried doing some research, but do you know how to prove $\frac{d}{dx}[\Gamma(1/2)] = (\pi)^\frac{1}{2}(-\gamma-2\ln(2))$ Or a link to somewhere with the proof? Oct 31, 2017 at 16:35
• The simplest way is possibly via the digamma function $\psi(z) = \Gamma'(z)/\Gamma(z)$.
|
The digamma function satisfies $$\psi(1+z) = -\gamma + \int_0^1 \frac{1-x^z}{1-x} dx$$ from which you can calculate $\psi(1/2)$ easily.
|
https://math.stackexchange.com/questions/1599843/pseudorandom-number-generator-using-uniform-random-variable
| 1,713,050,192,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296816853.44/warc/CC-MAIN-20240413211215-20240414001215-00261.warc.gz
| 362,996,706
| 37,687
|
# Pseudorandom Number Generator Using Uniform Random Variable
I am working out of Mathematical Statistics and Data Analysis by John Rice and ran into the following interesting problem I'm having trouble figuring out.
Ch 2 (#65)
How could random variables with the following density function be generated from a uniform random number generator?
$$f(x) = \frac{1 + \alpha x}{2}, \quad -1 \leq x \leq 1,\quad -1 \leq \alpha \leq 1$$
So I believe I'm supposed to use the following fact to solve the problem
Proposition D
Let U be uniform on [0, 1], and let X = $$F^{-1}$$(U). Then the cdf of X is F.
Proof
$$P(X \leq x) = P(F^{-1}(U) \leq x) = P(U \leq F(x)) = F(x)$$
That is, we can use uniform random variables to generate other random variables that will have cdf F
So my goal should then be to find a cdf and its inverse, then give the uniform random variable as input to the inverse. I've included my attempt.
Given $$f(x) = \frac{1 + \alpha x}{2}$$
$$F(X) = \int_{-1}^{x} \frac{1 + \alpha t}{2} dt \; = \; \frac{x}{2} + \frac{\alpha x}{4} + \frac{1}{2} - \frac{\alpha}{4}$$
$$4 \cdot F(X) - 2 + \alpha = 2x + \alpha x$$
$$F^{-1}(X) = \frac{4X - 2 + \alpha}{2 + \alpha}$$
So our random variable is, for example, T where
$$T = F^{-1}(U) = \frac{4U - 2 + \alpha}{2 + \alpha}$$
The answer in the back of the book is
$$X = [-1 + 2 \sqrt{1/4 - \alpha(1/2 - \alpha / 4 - U)}]/ \alpha$$
I'm not really sure where I went wrong. Any help?
• I think I may have spotted a problem with my integration, I'll continue to try and work it out, but any other input is still welcome Jan 4, 2016 at 17:17
• Yes, the antiderivative should have a $t^2$ term. Jan 4, 2016 at 17:19
• Yes that was it, I worked it out. The devil's always in the details I guess Jan 4, 2016 at 17:32
• Slips of this kind are universal. The unfortunate thing is that sometimes they lead students who understand something perfectly well to doubt their understanding. Jan 4, 2016 at 17:36
• This is known as Inverse Transform Sampling. It's a good technique to generate random numbers from a given density; however, it's naive in the sense that the CDF must be calculated (which is not always possible or is too hard). The proof of why this works is here: en.wikipedia.org/wiki/Inverse_transform_sampling You should look into rejection sampling techniques to see an alternative way to generate random numbers. Jan 4, 2016 at 23:37
The cdf appears to be wrong. When $-1\leq x\leq 1$, \begin{align*} F_X(x) &= \int_{-1}^{x} \frac{1 + \alpha t}{2} dt\\ &=\int_{-1}^x \frac{1}{2}+\frac{\alpha}{2}t\,dt\\ &=\frac{1}{2}[x+1]+\frac{\alpha}{4}[x^2-1]\\ \end{align*} Other than that, your approach seems fine.
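For completeness (not part of the original answer): setting this corrected cdf equal to $U$ and solving the resulting quadratic in $x$ recovers the book's formula,

$$\frac{\alpha}{4}x^2+\frac12 x+\frac12-\frac{\alpha}{4}-U=0 \;\Longrightarrow\; x=\frac{-1+2\sqrt{\tfrac14-\alpha\left(\tfrac12-\tfrac{\alpha}{4}-U\right)}}{\alpha},$$

where the $+$ root is the one lying in $[-1,1]$.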
• Thanks for your help! After completing the square I was able to solve it. I would upvote your answer, but I don't have enough reputation yet. Jan 4, 2016 at 17:33
• @ApprenticeOfMathematics Don't worry. It was pointed out in the comments while I was writing the answer, but I didn't see it. I'm glad you got it.
– Em.
Jan 4, 2016 at 17:37
Comment: Demonstration in R with $\alpha = .2$ of answerbook result.
alpha = .2; m = 10^5; u = runif(m)            # m uniform draws on [0,1]
# Inverse CDF; algebraically this equals the book formula with u replaced by 1-u,
# which leaves the distribution of the samples unchanged.
x = (-1 + 2*sqrt(1/4 + alpha*(1/2 + alpha/4 - u)))/alpha
hist(x, col="wheat", prob=T)                  # sample histogram
curve((1 + alpha*x)/2, -1, 1, lwd=2, col="blue", add=T)   # target density overlay
• Very nice! It's awesome seeing simulations supporting the textbook result Jan 4, 2016 at 18:48
| 1,081
| 3,363
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.25
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.897132
|
# Pseudorandom Number Generator Using Uniform Random Variable
I am working out of Mathematical Statistics and Data Analysis by John Rice and ran into the following interesting problem I'm having trouble figuring out. Ch 2 (#65)
How could random variables with the following density function be generated from a uniform random number generator? $$f(x) = \frac{1 + \alpha x}{2}, \quad -1 \leq x \leq 1,\quad -1 \leq \alpha \leq 1$$
So I believe I'm supposed to use the following fact to solve the problem
Proposition D
Let U be uniform on [0, 1], and let X = $$F^{-1}$$(U). Then the cdf of X is F.
Proof
$$P(X \leq x) = P(F^{-1}(U) \leq x) = P(U \leq F(x)) = F(x)$$
That is, we can use uniform random variables to generate other random variables that will have cdf F
So my goal should then be to find a cdf and its inverse, then give the uniform random variable as input to the inverse. I've included my attempt. Given $$f(x) = \frac{1 + \alpha x}{2}$$
$$F(X) = \int_{-1}^{x} \frac{1 + \alpha t}{2} dt \; = \; \frac{x}{2} + \frac{\alpha x}{4} + \frac{1}{2} - \frac{\alpha}{4}$$
$$4 \cdot F(X) - 2 + \alpha = 2x + \alpha x$$
$$F^{-1}(X) = \frac{4X - 2 + \alpha}{2 + \alpha}$$
So our random variable is, for example, T where
$$T = F^{-1}(U) = \frac{4U - 2 + \alpha}{2 + \alpha}$$
The answer in the back of the book is
$$X = [-1 + 2 \sqrt{1/4 - \alpha(1/2 - \alpha / 4 - U)}]/ \alpha$$
I'm not really sure where I went wrong. Any help? • I think I may have spotted a problem with my integration, I'll continue to try and work it out, but any other input is still welcome Jan 4, 2016 at 17:17
• Yes, the antiderivative should have a $t^2$ term. Jan 4, 2016 at 17:19
• Yes that was it, I worked it out. The devil's always in the details I guess Jan 4, 2016 at 17:32
• Slips of this kind are universal. The unfortunate thing is that sometimes they lead students who understand something perfectly well to doubt their understanding. Jan 4, 2016 at 17:36
• This is known as Inverse Transform Sampling. It's a good technique to generate random numbers from a given density; however, it's naive in the sense that the CDF must be calculated (which is not always possible or is too hard). The proof of why this works is here: en.wikipedia.org/wiki/Inverse_transform_sampling You should look into rejection sampling techniques to see an alternative way to generate random numbers. Jan 4, 2016 at 23:37
The cdf appears to be wrong. When $-1\leq x\leq 1$, \begin{align*} F_X(x) &= \int_{-1}^{x} \frac{1 + \alpha t}{2} dt\\ &=\int_{-1}^x \frac{1}{2}+\frac{\alpha}{2}t\,dt\\ &=\frac{1}{2}[x+1]+\frac{\alpha}{4}[x^2-1]\\ \end{align*} Other than that, your approach seems fine. • Thanks for your help! After completing the square I was able to solve it. I would upvote your answer, but I don't have enough reputation yet. Jan 4, 2016 at 17:33
• @ApprenticeOfMathematics Don't worry. It was pointed out in the comments while I was writing the answer, but I didn't see it. I'm glad you got it. – Em.
|
Jan 4, 2016 at 17:37
Comment: Demonstration in R with $\alpha = .2$ of answerbook result.
|
https://math.stackexchange.com/questions/1732225/evaluate-the-definite-integral-int-0-infty-fracx-sin-mxx2a2dx/1732246
| 1,558,347,924,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-22/segments/1558232255943.0/warc/CC-MAIN-20190520101929-20190520123929-00000.warc.gz
| 559,068,613
| 32,289
|
Evaluate the definite integral $\int_0^\infty \frac{x\sin mx}{x^2+a^2}dx$
Evaluate the definite integral $$\int_0^\infty \frac{x\sin mx}{x^2+a^2}dx \quad (m,a>0)$$
I tried a trigonometric substitution but did not get anywhere with that; I think the multiple variables are throwing me off.
First, we get rid of one parameter by substituting $x=az$. Then we have to compute: $$I(k) = \int_{0}^{+\infty}\frac{x}{x^2+1}\cdot\sin(kx)\,dx=\frac{1}{2}\int_{\mathbb{R}}\frac{x}{x^2+1}\cdot\sin(kx)\,dx$$ that is half the imaginary part of $\int_{\mathbb{R}}\frac{x e^{ikx}}{x^2+1}\,dx$. By computing the residue of the integrand function at $x=i$ it follows that: $$I(k) = \frac{\pi}{2}\cdot e^{-k},$$ hence:
$$\int_{0}^{+\infty}\frac{x\sin(mx)}{x^2+a^2}\,dx = \color{red}{\frac{\pi}{2}\cdot e^{-am}}.$$
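As a numerical cross-check (not in the original answer), mpmath's oscillatory quadrature reproduces the boxed value for sample parameters; the values of $m$ and $a$ below are arbitrary test choices.

import mpmath as mp

m, a = 2.0, 1.5                                      # arbitrary positive test values
f = lambda x: x * mp.sin(m*x) / (x**2 + a**2)
lhs = mp.quadosc(f, [0, mp.inf], period=2*mp.pi/m)   # quadrature tuned for oscillatory tails
rhs = mp.pi/2 * mp.exp(-a*m)
print(lhs, rhs)                                      # both come out around 0.0782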
• @Gaffney: the integration path should be a rectangle enclosing $x=i$, indeed. – Jack D'Aurizio Apr 7 '16 at 17:11
• @Gaffney: the situation is almost the same as the usual proof of $\int_{0}^{+\infty}\frac{\sin x}{x}\,dx=\frac{\pi}{2}$. Have a look at robjohn's proof here: math.stackexchange.com/questions/594641/… – Jack D'Aurizio Apr 7 '16 at 17:39
| 416
| 1,154
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.921875
| 4
|
CC-MAIN-2019-22
|
latest
|
en
| 0.648361
|
Evaluate the definite integral $\int_0^\infty \frac{x\sin mx}{x^2+a^2}dx$
Evaluate the definite integral $$\int_0^\infty \frac{x\sin mx}{x^2+a^2}dx \quad (m,a>0)$$
I tried a trigonometric substitution but did not get anywhere with that; I think the multiple variables are throwing me off. First, we get rid of one parameter by substituting $x=az$. Then we have to compute: $$I(k) = \int_{0}^{+\infty}\frac{x}{x^2+1}\cdot\sin(kx)\,dx=\frac{1}{2}\int_{\mathbb{R}}\frac{x}{x^2+1}\cdot\sin(kx)\,dx$$ that is half the imaginary part of $\int_{\mathbb{R}}\frac{x e^{ikx}}{x^2+1}\,dx$. By computing the residue of the integrand function at $x=i$ it follows that: $$I(k) = \frac{\pi}{2}\cdot e^{-k},$$ hence:
$$\int_{0}^{+\infty}\frac{x\sin(mx)}{x^2+a^2}\,dx = \color{red}{\frac{\pi}{2}\cdot e^{-am}}.$$
• @Gaffney: the integration path should be a rectangle enclosing $x=i$, indeed.
|
– Jack D'Aurizio Apr 7 '16 at 17:11
• @Gaffney: the situation is almost the same as the usual proof of $\int_{0}^{+\infty}\frac{\sin x}{x}\,dx=\frac{\pi}{2}$.
|
https://cs.stackexchange.com/questions/136951/divide-and-conquer-algorithm-for-a-gas-station-problem
| 1,721,000,622,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-30/segments/1720763514654.12/warc/CC-MAIN-20240714220017-20240715010017-00677.warc.gz
| 173,556,302
| 40,052
|
# Divide and conquer algorithm for a gas station problem
There is just one road connecting the n+1 cities c0, …, cn consecutively. You want to go from c0 to cn stopping at most s times to fill the tank of the car. There are gas stations at the cities, but none on the roads. The length of each road is ℓ0, …, ℓn−1. Which is the minimum range for your car? Suppose that you start with a full tank.
This has to be done in something like nlogn, because I already tried the n^2 approach and it is not good. I don't know how to decide if I should refill at some point or not. For this input:
5 0
100 300 500 200 400
5 1
100 300 500 200 400
5 2
100 300 500 200 400
5 3
100 300 500 200 400
5 4
100 300 500 200 400
The output should be: 1500 900 600 500 500
(Consider that the input is a sequence of n and s, followed by n naturals that represent l_i.)
Source
• What's the context where you encountered this task? Please credit the source of all copied text.
– D.W.
Commented Mar 22, 2021 at 18:19
You can use binary search on the range of the car. Refill only if needed.
Here is the algorithm in more detail.
1. Let low = 0 and high = distance from c0 to cn. During the whole algorithm, range high is always big enough for a car to reach cn from c0, while range low is always not.
2. Repeat the following as long as low + 1 < high
1. mid = (low + high) /2
2. If range mid is big enough, set high = mid. Otherwise, set low = mid.
3. Return high, which must be the minimum range of the car.
How can we check if a given range is big enough?
Simple.
We will try driving a car with that range from c0 to cn. At each city, if the remaining range of the car is not enough for the car to reach the next city, refill the car to its full range. If we have refilled s+1 times before we have reached cn, that given range is not enough. Otherwise, it is enough.
It takes $$O(n)$$ time to check whether a given range is big enough.
The bisecting loop will run at most $$\lceil\log_2 m\rceil$$ iterations, where $$m$$ is the distance from c0 to cn.
So, the algorithm runs in $$O(n\log_2 m)$$-time.
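A short sketch of this algorithm in Python (not part of the original answer); `min_range` runs the greedy feasibility check inside the binary search and reproduces the sample output 1500 900 600 500 500.

def min_range(lengths, s):
    # Smallest tank range so the trip needs at most s refuelling stops.
    def feasible(r):
        if max(lengths) > r:          # a single leg longer than r can never be covered
            return False
        stops, fuel = 0, r
        for leg in lengths:
            if leg > fuel:            # must refill at this city
                stops += 1
                fuel = r
                if stops > s:
                    return False
            fuel -= leg
        return True

    low, high = 0, sum(lengths)       # low is always infeasible, high always feasible
    while low + 1 < high:
        mid = (low + high) // 2
        if feasible(mid):
            high = mid
        else:
            low = mid
    return high

legs = [100, 300, 500, 200, 400]
print([min_range(legs, s) for s in range(5)])   # [1500, 900, 600, 500, 500]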
• The low + 1 < high was neat! Any explanation on how you deducted it? Just this line, I got the explanation about everything else. Commented Mar 27, 2021 at 11:12
• I love that condition, too. I saw it somewhere on the web taught as a part of a classical way of binary search. Commented Mar 27, 2021 at 14:39
| 684
| 2,396
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.953125
| 4
|
CC-MAIN-2024-30
|
latest
|
en
| 0.926447
|
# Divide and conquer algorithm for a gas station problem
There is just one road connecting the n+1 cities c0, …, cn consecutively. You want to go from c0 to cn stopping at most s times to fill the tank of the car. There are gas stations at the cities, but none on the roads. The length of each road is ℓ0, …, ℓn−1. Which is the minimum range for your car? Suppose that you start with a full tank. This has to be done in something like nlogn, because I already tried the n^2 approach and it is not good. I don't know how to decide if I should refill at some point or not. For this input:
5 0
100 300 500 200 400
5 1
100 300 500 200 400
5 2
100 300 500 200 400
5 3
100 300 500 200 400
5 4
100 300 500 200 400
The output should be: 1500 900 600 500 500
(Consider that the input is a sequence of n and s, followed by n naturals that represent l_i.) Source
• What's the context where you encountered this task? Please credit the source of all copied text. – D.W. Commented Mar 22, 2021 at 18:19
You can use binary search on the range of the car. Refill only if needed. Here is the algorithm in more detail. 1. Let low = 0 and high = distance from c0 to cn. During the whole algorithm, range high is always big enough for a car to reach cn from c0, while range low is always not. 2. Repeat the following as long as low + 1 < high
1. mid = (low + high) /2
2. If range mid is big enough, set high = mid. Otherwise, set low = mid. 3. Return high, which must be the minimum range of the car. How can we check if a given range is big enough? Simple. We will try driving a car with that range from c0 to cn. At each city, if the remaining range of the car is not enough for the car to reach the next city, refill the car to its full range. If we have refilled s+1 times before we have reached cn, that given range is not enough. Otherwise, it is enough. It takes $$O(n)$$ time to check whether a given range is big enough. The bisecting loop will run at most $$\lceil\log_2 m\rceil$$ iterations, where $$m$$ is the distance from c0 to cn.
|
So, the algorithm runs in $$O(n\log_2 m)$$-time.
|
https://math.stackexchange.com/questions/2969714/prove-the-unique-critical-points-exist-and-prove-there-is-absolute-minimum-at-th
| 1,713,791,708,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296818293.64/warc/CC-MAIN-20240422113340-20240422143340-00527.warc.gz
| 316,849,539
| 36,055
|
# Prove the unique critical points exist and prove there is absolute minimum at this point
Hello! I've scavenged the internet and asked two professors, to no avail. I don't want to ask my professor because he does not want to give us the answer, since he wants to put it on the final.
Anyway, story aside, the picture is of problem number 55 from chapter 14.7 (Maximum and Minimum values) of the book "Calculus" by Stewart.
In addition to the question given by the textbook, my professor told us to 1) prove the unique critical points exist and 2) prove there is an absolute minimum at this point.
The question was given in class but he said it will be on the final, so our class is scrambling to find the solution. Other professors are having trouble with it too (or, in hindsight, we suspect it might be that they just don't want to deal with it since we aren't their students).
We found a website that answers the textbook's question, but not the questions given by the professor: http://www.slader.com/textbook/9780538497817-stewart-calculus-7th-edition/979/exercises/55/#
Any help would be appreciated. Thank you!
• Which part exactly are you having trouble with? Oct 24, 2018 at 21:34
• 1) Prove the unique critical points exist and 2) prove there is an absolute minimum at this point. The thing is, I understand the chapter very well, it's this specific problem I'm having trouble understanding in the first place. I appreciate you want me to actually learn it, but I assure you I just need the answer and solution, if I see it done once I'll understand what's going on. Oct 25, 2018 at 1:28
• autarkaw.org/2012/09/03/… Perhaps this may be of interest? Oct 25, 2018 at 2:43
• Yes thank you! How do I mark yours as an answer? Oct 25, 2018 at 17:51
• Mine was a comment, so you can't accept it. Plus, it's not my original answer, it's someone else's that made the article, they should have the credit, not me Oct 25, 2018 at 17:54
Let $$f(m, b) = \sum_{i=1}^n (y_i - (m x_i + b))^2$$. The goal of the least squares method is to find $$m$$ and $$b$$ that minimize this function. (Note that the $$y_i$$ and $$x_i$$ are just fixed numbers, not variables.)
More specifically, if you try to compute the critical points of $$f$$ and find only one point, then you have shown existence and uniqueness.
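Here is a small symbolic sketch of that plan (not from the original answer); the data points are made up, and SymPy is used to solve the normal equations and to confirm that the Hessian is positive definite, so the single critical point is a global minimum.

import sympy as sp

m, b = sp.symbols('m b', real=True)
xs = [1, 2, 3, 4, 5]                    # hypothetical data
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

f = sum((y - (m*x + b))**2 for x, y in zip(xs, ys))

# Critical points: grad f = 0, i.e. the "normal equations"; there is exactly one
# solution as long as the x_i are not all equal.
sol = sp.solve([sp.diff(f, m), sp.diff(f, b)], [m, b], dict=True)
print(sol)

# The Hessian is constant in (m, b); both eigenvalues positive => global minimum.
H = sp.hessian(f, (m, b))
print(H.eigenvals())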
| 596
| 2,293
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.65625
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.961803
|
# Prove the unique critical points exist and prove there is absolute minimum at this point
Hello! I've scavenged the internet and asked two professors, to no avail. I don't want to ask my professor because he does not want to give us the answer, since he wants to put it on the final. Anyway, story aside, the picture is of problem number 55 from chapter 14.7 (Maximum and Minimum values) of the book "Calculus" by Stewart. In addition to the question given by the textbook, my professor told us to 1) prove the unique critical points exist and 2) prove there is an absolute minimum at this point. The question was given in class but he said it will be on the final, so our class is scrambling to find the solution. Other professors are having trouble with it too (or, in hindsight, we suspect it might be that they just don't want to deal with it since we aren't their students). We found a website that answers the textbook's question, but not the questions given by the professor: http://www.slader.com/textbook/9780538497817-stewart-calculus-7th-edition/979/exercises/55/#
Any help would be appreciated. Thank you! • Which part exactly are you having trouble with? Oct 24, 2018 at 21:34
• 1) Prove the unique critical points exist and 2) prove there is an absolute minimum at this point. The thing is, I understand the chapter very well, it's this specific problem I'm having trouble understanding in the first place. I appreciate you want me to actually learn it, but I assure you I just need the answer and solution, if I see it done once I'll understand what's going on. Oct 25, 2018 at 1:28
• autarkaw.org/2012/09/03/… Perhaps this may be of interest? Oct 25, 2018 at 2:43
• Yes thank you! How do I mark yours as an answer? Oct 25, 2018 at 17:51
• Mine was a comment, so you can't accept it.
|
Plus, it's not my original answer, it's someone else's that made the article, they should have the credit, not me Oct 25, 2018 at 17:54
Let $$f(m, b) = \sum_{i=1}^n (y_i - (m x_i + b))^2$$.
|
https://matheducators.stackexchange.com/questions/10933/how-to-explain-fractional-terms
| 1,713,838,538,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00032.warc.gz
| 340,019,961
| 39,811
|
# How to explain "fractional terms"?
As far as I can see, there are mainly two ways to introduce fractional terms. Two examples to demonstrate the two variants:
1. $\frac{a^2+3}{a}; \frac{3}{2c}$
2. $T(a) = \frac{a^2+3}{a}; T(c) = \frac{3}{2c}$.
In fact, the word "function" or "functional term" or "equation with variables" has not been learned at this point, but I think that variant two is able to demonstrate better that any term can be seen as a "number machine", in which you "throw" a number and get a certain result. I could imagine that the first variant leads to young learners being unsure what exactly to do with this expression.
I did a quick search in some school books here (in Germany) and found those two variants. So even though this is really just a small difference, do you have any opinion or experience?
• What age group do you teach? I use fraction for just numerical and rational function for the functions of polynomials divided by polynomials. Apr 28, 2016 at 15:38
• It is (sometimes) helpful to distinguish "rational expressions" from "rational functions" for precisely the same reason that it is (sometimes) helpful to distinguish between "polynomials" and "polynomial functions". Apr 28, 2016 at 16:04
• Personally, I would not write $T(a) = \frac{a^2+3}{a}$ without first having the notion of "function". Apr 28, 2016 at 16:20
• "I could imagine that the first variant leads to young learners being unsure what exactly to do with this experssion." -- Well they shouldn't think that there's automatically something "to do" on any piece of math without a natural-language direction or question. Nov 2, 2016 at 3:04
• I don't see how fractions are relevant to the question; the same question could be asked about $a+1$.
– user797
Nov 2, 2016 at 12:20
| 467
| 1,779
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.921875
| 4
|
CC-MAIN-2024-18
|
latest
|
en
| 0.957591
|
# How to explain "fractional terms"? As far as I can see, there are mainly two ways to introduce fractional terms. Two examples to demonstrate the two variants:
1. $\frac{a^2+3}{a}; \frac{3}{2c}$
2. $T(a) = \frac{a^2+3}{a}; T(c) = \frac{3}{2c}$. In fact, the word "function" or "functional term" or "equation with variables" has not been learned at this point, but I think that variant two is able to demonstrate better that any term can be seen as a "number machine", in which you "throw" a number and get a certain result. I could imagine that the first variant leads to young learners being unsure what exactly to do with this expression. I did a quick search in some school books here (in Germany) and found those two variants. So even though this is really just a small difference, do you have any opinion or experience? • What age group do you teach? I use fraction for just numerical and rational function for the functions of polynomials divided by polynomials. Apr 28, 2016 at 15:38
• It is (sometimes) helpful to distinguish "rational expressions" from "rational functions" for precisely the same reason that it is (sometimes) helpful to distinguish between "polynomials" and "polynomial functions".
|
Apr 28, 2016 at 16:04
• Personally, I would not write $T(a) = \frac{a^2+3}{a}$ without first having the notion of "function".
|
https://math.stackexchange.com/questions/1518062/let-sn-k-denote-the-signless-stirling-numbers-of-the-first-kind-prove-that
| 1,571,131,981,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-43/segments/1570986657949.34/warc/CC-MAIN-20191015082202-20191015105702-00054.warc.gz
| 588,159,041
| 30,656
|
# Let $s(n,k)$ denote the signless Stirling numbers of the first kind. Prove that…
Let $s(n,k)$ denote the signless Stirling numbers of the first kind. Prove that:
$$s(n,2) = (n-1)!(1 + \frac{1}{2} + \frac{1}{3} +...+ \frac{1}{n-1})$$
I haven't dealt with Taylor series expansion in a long time and I'm not quite sure how Stirling numbers (which are brand new to me) would play into this proof. Any help is appreciated.
HINT: If you split $[n]$ into two cycles, one of length $k$ and the other of length $n-k$, there are $\binom{n}k$ ways to choose the elements of the $k$-cycle, $(k-1)!$ ways to arrange them in a cycle, and $(n-k-1)!$ ways to arrange the remaining elements in a cycle. That gives you a total of
$$\binom{n}k(k-1)!(n-k-1)!$$
permutations. Now sum over the possible values of $k$. Be careful, though: you’ll be counting every permutation twice.
The following partial-fraction identity will also be useful: $$\frac{n}{k(n-k)}=\frac1k+\frac1{n-k}\;.$$
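Not part of the hint, but a quick computational check of the identity, using the standard recurrence $c(n,k)=c(n-1,k-1)+(n-1)\,c(n-1,k)$ for the unsigned Stirling numbers:

from fractions import Fraction
from math import factorial

def stirling1_unsigned(n, k):
    # c(n,k) via the recurrence c(n,k) = c(n-1,k-1) + (n-1)*c(n-1,k)
    c = [[0] * (k + 1) for _ in range(n + 1)]
    c[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            c[i][j] = c[i-1][j-1] + (i-1) * c[i-1][j]
    return c[n][k]

for n in range(2, 10):
    harmonic = sum(Fraction(1, j) for j in range(1, n))   # 1 + 1/2 + ... + 1/(n-1)
    assert stirling1_unsigned(n, 2) == factorial(n - 1) * harmonic
print("s(n,2) = (n-1)! * (1 + 1/2 + ... + 1/(n-1)) verified for n = 2..9")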
| 281
| 891
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.6875
| 4
|
CC-MAIN-2019-43
|
latest
|
en
| 0.762026
|
# Let $s(n,k)$ denote the signless Stirling numbers of the first kind. Prove that…
Let $s(n,k)$ denote the signless Stirling numbers of the first kind. Prove that:
$$s(n,2) = (n-1)! (1 + \frac{1}{2} + \frac{1}{3} +...+ \frac{1}{n-1})$$
I haven't dealt with Taylor series expansion in a long time and I'm not quite sure how Stirling numbers (which are brand new to me) would play into this proof. Any help is appreciated. HINT: If you split $[n]$ into two cycles, one of length $k$ and the other of length $n-k$, there are $\binom{n}k$ ways to choose the elements of the $k$-cycle, $(k-1)!$ ways to arrange them in a cycle, and $(n-k-1)!$ ways to arrange the remaining elements in a cycle. That gives you a total of
$$\binom{n}k(k-1)! (n-k-1)!$$
permutations. Now sum over the possible values of $k$. Be careful, though: you’ll be counting every permutation twice.
|
The following partial-fraction identity will also be useful: $$\frac{n}{k(n-k)}=\frac1k+\frac1{n-k}\;.$$
|
http://math.stackexchange.com/questions/128369/would-like-some-pointers-on-this-geometry-problem
| 1,469,591,945,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257825365.1/warc/CC-MAIN-20160723071025-00054-ip-10-185-27-174.ec2.internal.warc.gz
| 153,846,424
| 17,989
|
# Would like some pointers on this geometry problem
I'm working on this problem (not homework):
Let $ABC$ be a triangle and $D$ a point on the side $AC$. If $C\hat{B}D-A\hat{B}D = 60°$, $B\hat{D}C = 30°$ and $AB \cdot BC =BD^2$, find the angles of the triangle $ABC$.
I've been working on it for a while, and have managed to express all the angles in terms of a single variable, but I have no idea how to use the relationship between the sides. I've tried to construct right triangles to use trigonometry or Pythagoras' Theorem, but it just ends up introducing new sides I don't care about. I drew a picture to make things clearer for me, but sadly I don't have a scanner or a camera right now, so I can't upload it.
It would be great if you could give me a few tips instead of the full answer; I'd like to figure it out (mostly) on my own.
-
Drop a perpendicular from $B$ to the side $AC$ and obtain $E$. You may assume $BE=1$. The triangle $DBE$ is a $30^\circ/60^\circ/90^\circ$ triangle, whence $BD=2$. Let $\beta:=\angle(ABD)=\angle(EBC)$. Then $BC$ and $BA$ can be expressed in terms of $\beta$, resp., $\beta+60^\circ$, and the condition $AB\cdot BC = BD^2$ amounts to $$\cos\beta\cdot \cos(\beta+60^\circ)={1\over4}\ .$$ Plot the left side of this equation as a function of $\beta$, and you will get a conjecture about $\beta$ which is easy to prove.
-
Sorry, it's $AB \cdot BC = BD^2$, the blockquote messed it up. But the idea of assigning some random value to a side might be helpful. – Javier Apr 5 '12 at 15:19
Also, how did you get $\cos \beta \cdot \cos (\beta + 60°) \le \frac1{4}$? – Javier Apr 5 '12 at 15:22
This was helpful, thank you! – Javier Apr 5 '12 at 18:50
$$\cos\beta\cdot \cos(\beta+60^\circ)={1\over4}\ .$$
implies
$$2\cos\beta\cdot \cos(\beta+60^\circ)={1\over2}\ .$$
so
$$\cos60^\circ + \cos(2\cdot\beta+60^\circ)={1\over2}\ .$$
so
$$\cos(2\cdot\beta+60^\circ)=0\ .$$
So $$(2\cdot\beta+60^\circ) = 90^\circ$$
Thus $\beta$ is $15^\circ$.
Thus in ABC, angles A, B and C are 15, 90 and 75 degrees respectively.
-
| 679
| 2,060
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.25
| 4
|
CC-MAIN-2016-30
|
latest
|
en
| 0.864727
|
# Would like some pointers on this geometry problem
I'm working on this problem (not homework):
Let $ABC$ be a triangle and $D$ a point on the side $AC$. If $C\hat{B}D-A\hat{B}D = 60°$, $B\hat{D}C = 30°$ and $AB \cdot BC =BD^2$, find the angles of the triangle $ABC$. I've been working on it for a while, and have managed to express all the angles in terms of a single variable, but I have no idea how to use the relationship between the sides. I've tried to construct right triangles to use trigonometry or Pythagoras' Theorem, but it just ends up introducing new sides I don't care about. I drew a picture to make things clearer for me, but sadly I don't have a scanner or a camera right now, so I can't upload it. It would be great if you could give me a few tips instead of the full answer; I'd like to figure it out (mostly) on my own. -
Drop a perpendicular from $B$ to the side $AC$ and obtain $E$. You may assume $BE=1$. The triangle $DBE$ is a $30^\circ/60^\circ/90^\circ$ triangle, whence $BD=2$. Let $\beta:=\angle(ABD)=\angle(EBC)$. Then $BC$ and $BA$ can be expressed in terms of $\beta$, resp., $\beta+60^\circ$, and the condition $AB\cdot BC = BD^2$ amounts to $$\cos\beta\cdot \cos(\beta+60^\circ)={1\over4}\ .$$ Plot the left side of this equation as a function of $\beta$, and you will get a conjecture about $\beta$ which is easy to prove. -
Sorry, it's $AB \cdot BC = BD^2$, the blockquote messed it up. But the idea of assigning some random value to a side might be helpful. – Javier Apr 5 '12 at 15:19
Also, how did you get $\cos \beta \cdot \cos (\beta + 60°) \le \frac1{4}$? – Javier Apr 5 '12 at 15:22
This was helpful, thank you!
|
– Javier Apr 5 '12 at 18:50
$$\cos\beta\cdot \cos(\beta+60^\circ)={1\over4}\ .$$
implies
$$2\cos\beta\cdot \cos(\beta+60^\circ)={1\over2}\ .$$
so
$$\cos60^\circ + \cos(2\cdot\beta+60^\circ)={1\over2}\ .$$
so
$$\cos(2\cdot\beta+60^\circ)=0\ .$$
So $$(2\cdot\beta+60^\circ) = 90^\circ$$
Thus $\beta$ is $15^\circ$.
|
https://math.stackexchange.com/questions/4225909/finding-the-sum-of-the-series-12-2-%C3%97-22-32-2-%C3%97-42-52-2-%C3%97-62
| 1,716,468,466,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-22/segments/1715971058625.16/warc/CC-MAIN-20240523111540-20240523141540-00805.warc.gz
| 339,060,011
| 37,000
|
# Finding the sum of the series $1^2 + 2 × 2^2 + 3^2 + 2 × 4^2 + 5^2 + 2 × 6^2 + . . . + 2(n − 1)^2 + n^2 ,$
The following is a question which has been bugging me for quite a while,
Find the sum of the series $$1^2 + 2 × 2^2 + 3^2 + 2 × 4^2 + 5^2 + 2 × 6^2 + . . . + 2(n − 1)^2 + n^2 ,$$ Where $$n$$ is odd
I started by denoting this entire series with $$S$$; from there it is apparent that $$S$$ in fact consists of two lesser series I shall call:
1. $$S_A$$ The sum of the squares of odd natural numbers up till $$n^2$$
2. $$S_B$$ Twice the sum of the squares of even natural numbers up till $$(n-1)^2$$
I resolved this would be easier to tackle by noting that $$S_A + \frac{1}{2}S_B = \sum_{r=1}^{n} r^2$$ Which is just the sum of the squares of the first $$n$$ natural numbers $$\therefore S_A +\frac{1}{2}S_B=\frac{n}{6}(n+1)(2n+1)$$
Leaving only the value of $$\frac{1}{2}S_B$$ to be found, this is where I am currently facing difficulty as I am unsure on whether my working is correct;
For the sum of the squares of the first n even natural numbers;
$$2^2 + 4^2 .... (2n)^2=2^2\sum_{r=1}^{n} r^2$$
$$\implies \frac{2}{3}n(n+1)(2n+1)$$
Hence the sum of the first $$n-1$$ even natural numbers should be
$$\frac{2}{3}n(n-1)(2n-1)$$
And
$$S= \frac{n}{6}(n+1)(2n+1) + \frac{2}{3}n(n-1)(2n-1)$$
$$\therefore S= \frac{1}{6}n(10n^2 -9n + 5)$$ However the correct answer is
$$\frac{1}{2}n^2(n+1)$$
Where has my working gone wrong and how would I arrive at the correct answer?
The given sum is equal to \begin{align} 1^2 + 2 × 2^2 + 3^2 + &2 × 4^2 + 5^2 + 2 × 6^2 + \dots + 2(n − 1)^2 + n^2\\ &=1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2 + \dots + (n − 1)^2 + n^2\\ &\qquad\;+2^2 + 4^2 + 6^2 + \dots + (n − 1)^2\\ &=\sum_{k=1}^n k^2+\sum_{k=1}^{m}(2k)^2=\sum_{k=1}^n k^2+4\sum_{k=1}^{m}k^2\\ &=\frac{n(n+1)(2n+1)}{6}+4\frac{m(m+1)(2m+1)}{6}\\&=\frac{n(n+1)(2n+1)}{6}+\frac{(n-1)(n+1)n}{6}\\ &=\frac{n(n+1)(2n+1+n-1)}{6}=\frac{n^2(n+1)}{2} \end{align} where $$m=\frac{n-1}{2}$$. Note that in the sum involving the even squares there are $$m$$ terms (not $$(n-1)$$).
• Can you explain why the second partial summation is up to $m=\frac{n-1}{2}$ and not $n-1$? Aug 16, 2021 at 16:42
• @Filthyscrub The last term in the second sum is $(n-1)^2=(2m)^2$ Aug 16, 2021 at 16:45
As implicitly pointed out in the given answer, your mistake is due to the fact that for the sum of the squares of even natural numbers up to $$n-1$$ we need to consider the following:
$$2^2 + 4^2 .... (n-1)^2=2^2\left(1+2^2+\dots+\left(\frac{n-1}2\right)^2\right)$$
and not
$$2^2 + 4^2 .... (2n)^2=2^2\sum_{r=1}^{n} r^2$$
which is the sum of the squares of even natural numbers up to $$2n$$.
Let f(n) be the sum of squares from 1 up to $$n^2$$. Then the answer to your question is f(n) + 4 f((n-1)/2). Find a formula for f(n) and substitute it. It should be close to 1.5 f(n) since you are adding about half of the terms twice.
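A brute-force check of the closed form (not from the original thread):

def series_sum(n):
    # 1^2 + 2*2^2 + 3^2 + 2*4^2 + ... + 2*(n-1)^2 + n^2 for odd n: even indices count twice
    return sum(k*k if k % 2 else 2*k*k for k in range(1, n + 1))

for n in range(1, 30, 2):
    assert series_sum(n) == n*n*(n + 1) // 2
print("closed form n^2(n+1)/2 confirmed for odd n up to 29")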
| 1,217
| 2,909
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 28, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.34375
| 4
|
CC-MAIN-2024-22
|
latest
|
en
| 0.781761
|
# Finding the sum of the series $1^2 + 2 × 2^2 + 3^2 + 2 × 4^2 + 5^2 + 2 × 6^2 + . . . + 2(n − 1)^2 + n^2 ,$
The following is a question which has been bugging me for quite a while,
Find the sum of the series $$1^2 + 2 × 2^2 + 3^2 + 2 × 4^2 + 5^2 + 2 × 6^2 + . . . + 2(n − 1)^2 + n^2 ,$$ Where $$n$$ is odd
I started by denoting this entire series with $$S$$; from there it is apparent that $$S$$ in fact consists of two lesser series I shall call:
1. $$S_A$$ The sum of the squares of odd natural numbers up till $$n^2$$
2. $$S_B$$ Twice the sum of the squares of even natural numbers up till $$(n-1)^2$$
I resolved this would be easier to tackle by noting that $$S_A + \frac{1}{2}S_B = \sum_{r=1}^{n} r^2$$ Which is just the sum of the squares of the first $$n$$ natural numbers $$\therefore S_A +\frac{1}{2}S_B=\frac{n}{6}(n+1)(2n+1)$$
Leaving only the value of $$\frac{1}{2}S_B$$ to be found, this is where I am currently facing difficulty as I am unsure on whether my working is correct;
For the sum of the squares of the first n even natural numbers;
$$2^2 + 4^2 .... (2n)^2=2^2\sum_{r=1}^{n} r^2$$
$$\implies \frac{2}{3}n(n+1)(2n+1)$$
Hence the sum of the first $$n-1$$ even natural numbers should be
$$\frac{2}{3}n(n-1)(2n-1)$$
And
$$S= \frac{n}{6}(n+1)(2n+1) + \frac{2}{3}n(n-1)(2n-1)$$
$$\therefore S= \frac{1}{6}n(10n^2 -9n + 5)$$ However the correct answer is
$$\frac{1}{2}n^2(n+1)$$
Where has my working gone wrong and how would I arrive at the correct answer? The given sum is equal to \begin{align} 1^2 + 2 × 2^2 + 3^2 + &2 × 4^2 + 5^2 + 2 × 6^2 + \dots + 2(n − 1)^2 + n^2\\ &=1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2 + \dots + (n − 1)^2 + n^2\\ &\qquad\;+2^2 + 4^2 + 6^2 + \dots + (n − 1)^2\\ &=\sum_{k=1}^n k^2+\sum_{k=1}^{m}(2k)^2=\sum_{k=1}^n k^2+4\sum_{k=1}^{m}k^2\\ &=\frac{n(n+1)(2n+1)}{6}+4\frac{m(m+1)(2m+1)}{6}\\&=\frac{n(n+1)(2n+1)}{6}+\frac{(n-1)(n+1)n}{6}\\ &=\frac{n(n+1)(2n+1+n-1)}{6}=\frac{n^2(n+1)}{2} \end{align} where $$m=\frac{n-1}{2}$$. Note that in the sum involving the even squares there are $$m$$ terms (not $$(n-1)$$). • Can you explain why the second partial summation is up to $m=\frac{n-1}{2}$ and not $n-1$?
|
Aug 16, 2021 at 16:42
• @Filthyscrub The last term in the second sum is $(n-1)^2=(2m)^2$ Aug 16, 2021 at 16:45
As implicitly pointed out in the given answer, your mistake is due to the fact that for the sum of the squares of even natural numbers up to $$n-1$$ we need to consider the following:
$$2^2 + 4^2 + \dots + (n-1)^2=2^2\left(1^2+2^2+\dots+\left(\frac{n-1}{2}\right)^2\right)$$
and not
$$2^2 + 4^2 + \dots + (2n)^2=2^2\sum_{r=1}^{n} r^2$$
which is the sum of the squares of even natural numbers up to $$2n$$.
|
https://math.stackexchange.com/questions/1986675/proving-bipartite-graph-properties
| 1,560,884,973,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-26/segments/1560627998813.71/warc/CC-MAIN-20190618183446-20190618205446-00493.warc.gz
| 529,896,548
| 35,018
|
# Proving bipartite graph properties
Fix a board where the height and width are greater than 2. A valid walk on a board is to start on one square and move to a vertically or horizontally adjacent square, so diagonal traversals are not allowed. A tour is a valid walk such that each square is touched exactly once and the walk begins and ends on the same square. Prove that if either the height or width is even, a tour exists. Also prove that if p and q are odd, there is no tour.
First, I prove that this "board" is merely a bipartite graph. Let $V_{i,j}$ be a cell on the graph where $i,j \ge 0$ and $i,j \in \mathbb{Z}^+$. If $i$ is even and $j$ is even, color the cell $c_1$. If $i$ is even and $j$ is odd, color the cell $c_2$. If $i$ is odd and $j$ is odd, then color the cell $c_1$. If $i$ is odd and $j$ is even, color the cell $c_2$. Note that the board is two-colored, which is a bipartite graph.
Claim 2: If the height and width are odd, a tour doesn't exist.
1. Assume by contradiction that a tour exists.
2. If the height and width are both odd, the total number of cells (or vertices) is also odd.
3. A tour on a graph of odd vertices would be of odd-length.
4. This is a contradiction because bipartite graphs only contain even-length cycles, so such a tour cannot exist.
I'm unsure of a couple of things, which I hope someone can clear up:
1. Am I interpreting this question right by converting it into a bipartite graph?
2. Is the proof for Claim 2 correct? If not, how can I optimize it?
3. How would I approach proving Claim 1? I was thinking that if there are an even number of vertices in the graph and the two bipartitions have an equal number of vertices in them, a tour might exist. However, I'm not sure how to formulate this into better words.
• What you’ve done so far is fine. For the rest, see my answer to this question. – Brian M. Scott Oct 26 '16 at 21:35
• The coloring can be used to demonstrate that this board can be regarded as a bipartite graph, and you have used that to good effect to prove the second statement. For the first statement, you might be able to simply demonstrate how to construct a valid tour. – Joffan Oct 26 '16 at 21:35
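Following Joffan's suggestion to demonstrate a tour directly, here is one possible construction (my own sketch, not taken from the linked answers; function names are illustrative): when the number of rows is even, snake through columns $1,\dots,n-1$ and walk back up column $0$; if only the number of columns is even, transpose first.

```python
def closed_tour(m, n):
    """A closed tour of an m x n board, assuming m (rows) is even and m, n >= 2."""
    assert m % 2 == 0 and m >= 2 and n >= 2
    tour = [(0, 0)]
    for i in range(m):                                # snake through columns 1..n-1
        cols = range(1, n) if i % 2 == 0 else range(n - 1, 0, -1)
        tour.extend((i, j) for j in cols)
    tour.extend((i, 0) for i in range(m - 1, 0, -1))  # return to the start along column 0
    return tour

def is_tour(tour, m, n):
    closed = tour + [tour[0]]
    steps_ok = all(abs(a - c) + abs(b - d) == 1
                   for (a, b), (c, d) in zip(closed, closed[1:]))
    return len(set(tour)) == len(tour) == m * n and steps_ok

assert is_tour(closed_tour(2, 3), 2, 3)
assert is_tour(closed_tour(4, 7), 4, 7)
assert is_tour(closed_tour(6, 6), 6, 6)
```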
| 579
| 2,183
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.734375
| 4
|
CC-MAIN-2019-26
|
latest
|
en
| 0.924387
|
# Proving bipartite graph properties
Fix a board where the height and width are greater than 2. A valid walk on a board is to start on one square and move to a vertically or horizontally adjacent square, so diagonal traversals are not allowed. A tour is a valid walk such that each square is touched exactly once and the walk begins and ends on the same square. Prove that if either the height or width is even, a tour exists. Also prove that if p and q are odd, there is no tour. First, I prove that this "board" is merely a bipartite graph. Let $V_{i,j}$ be a cell on the graph where $i,j \ge 0$ and $i,j \in \mathbb{Z}^+$. If $i$ is even and $j$ is even, color the cell $c_1$. If $i$ is even and $j$ is odd, color the cell $c_2$. If $i$ is odd and $j$ is odd, then color the cell $c_1$.
|
If $i$ is odd and $j$ is even, color the cell $c_2$.
|
https://math.stackexchange.com/questions/2852964/symmetric-group-and-the-empty-set
| 1,695,438,748,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00516.warc.gz
| 428,600,247
| 34,764
|
# Symmetric group and the empty set
I have a set $X=\left \{1,...,n\right \}$ and the symmetric group on a set of n elements has order n!.
When $n=0$, why do we have $S_{\left \{1,...,0\right \}}=S_{\varnothing }$ ?
I know that n=0 means there are no elements in the set and $0!=1$ but why do we have ${\left \{1,...,0\right \}}={\varnothing }$? Is it not simply ${\varnothing }=\left \{\right \}$?
Thank you.
Your $X$ makes no sense when $n=0$ so it must be defined separately (if you really wanted to, but I don’t know anyone who cares about $S_0$). The natural thing is for $S_0$ to stand for the group of all bijections on the set with no object, the empty set. There is exactly one such function (the empty function) so $S_0$ is trivial, just like $S_1$. And, $0!=1!=1$ so everything is coherent.
• I was wondering what the teacher meant when they wrote $S_{\left \{1,...,0\right \}}=S_{\varnothing }$; it is only a shortened notation then, I guess. I thought it was strange. Thanks again. Jul 16, 2018 at 1:11
Think of $X = \{1,...,n\}$ as being shorthand for $X = \{i \in \mathbb{Z} \mid 1 \le i \le n\}$.
So if $n=0$ then $X=\emptyset$.
• I understand that, but the notation ${\left \{1,...,0\right \}}={\varnothing }$ is what I have a problem with. The set ${\left \{1,...,0\right \}}$ is not empty, but I'm guessing it's not literal. Jul 16, 2018 at 1:33
• The notation $\{1,...,n\}$ is kind of a bad notation, so the way to understand it in the "exceptional" cases is to rewrite it so that it becomes good notation. That's what I was trying to say. Jul 16, 2018 at 1:41
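For what it's worth (my own illustration, not from the answers), Python agrees that there is exactly one bijection of the empty set, consistent with $|S_0|=0!=1$:

```python
from itertools import permutations
from math import factorial

print(list(permutations([])))   # [()] -- the single "empty" permutation
print(factorial(0))             # 1, matching |S_0| = 0! = 1
```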
| 516
| 1,580
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.71875
| 4
|
CC-MAIN-2023-40
|
latest
|
en
| 0.895095
|
# Symmetric group and the empty set
I have a set $X=\left \{1,...,n\right \}$ and the symmetric group on a set of n elements has order n!. When $n=0$,why do we have $S_{\left \{1,...,0\right \}}=S_{\varnothing }$ ? I know that n=0 means there are no elements in the set and $0!=1$ but why do we have ${\left \{1,...,0\right \}}={\varnothing }$? Is it not simply ${\varnothing }=\left \{\right \}$? Thank you. Your $X$ makes no sense when $n=0$ so it must be defined separately (if you really wanted to, but I don’t know anyone who cares about $S_0$). The natural thing is for $S_0$ to stand for the group of all bijections on the set with no object, the empty set. There is exactly one such function (the empty function) so $S_0$ is trivial, just like $S_1$. And, $0!=1!=1$ so everything is coherent. • I was wondering what the teacher meant when they wrote $S_{\left \{1,...,0\right \}}=S_{\varnothing }$, it is only a shortened notation then I guess.I thought it was strange.Thanks again. Jul 16, 2018 at 1:11
Think of $X = \{1,...,n\}$ as being shorthand for $X = \{i \in \mathbb{Z} \mid 1 \le i \le n\}$. So if $n=0$ then $X=\emptyset$.
|
• I understand that, but the notation ${\left \{1,...,0\right \}}={\varnothing }$ is what I have a problem with. The set ${\left \{1,...,0\right \}}$ is not empty, but I'm guessing it's not literal.
|
http://math.stackexchange.com/questions/215875/symmetric-matrix-with-a-ij-0-for-all-i-j-1-has-all-eigenvalues-of?answertab=votes
| 1,462,131,942,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-18/segments/1461860116886.38/warc/CC-MAIN-20160428161516-00001-ip-10-239-7-51.ec2.internal.warc.gz
| 177,507,244
| 17,796
|
# Symmetric matrix with $a_{ij} = 0$ for all $|i - j| > 1$ has all eigenvalues of multiplicity $1$
1 . Let $A = (a_{ij})$ be a real $n \times n$ matrix such that $a_{ij} = a_{ji}$ for all $1 \leq i,j \leq n$ and $a_{ij} = 0$ for $|i-j|>1$. Moreover $a_{ij}$ is non-zero for all $i$,$j$ satisfying $|i-j| = 1$. Show that all the eigenvalues of $A$ are of multiplicity $1$.
2 . Give examples of 2 real $n \times n$ matrices $X = (x_{ij})$, $Y = (y_{ij})$ where $x_{ij} = x_{ji}$ and $y_{ij} = y_{ji}$ for all $1 \leq i,j \leq n$ so that $xX +yY$ has $n$ non-repeated eigenvalues for all real numbers $x$, $y$ where $x$, $y$ are not zero simultaneously.
Thank you for any help.
-
for part(2), let X = diag(1,2,0,3) and Y = diag(4,0,5,6). – Inquest Oct 17 '12 at 20:29
@Inquest What if x=y=1? – Ester Oct 17 '12 at 20:33
In part 1. you have the Jacobi matrices. This property is well-known. – PAD Oct 17 '12 at 20:41
@Timothy. Argh. You are right. Ignore my comment. – Inquest Oct 17 '12 at 20:45
Can anyone please help me with the second one ? – Ester Oct 17 '12 at 21:20
2) Let $X,Y$ be linearly independent real symmetric matrices of order 2 and trace zero.
Let $Z$ be any linear combination of $X$ and $Y$. Notice that $Z$ has the same properties. Therefore its eigenvalues $(Z$ is diagonalizable since it is real symmetric$)$ have opposite signs $($their sum must be zero$)$, unless $Z$ is the zero matrix. But that occurs only if $Z$ is the trivial combination of $X$ and $Y$.
Now for matrices of order $2k$ $(2k+1)$, instead of $X$ and $Y$, use $F(X)$ and $F(Y)$ $(G(X)$ and $G(Y))$.
$F(X)=\left(\begin{array}{cccc} X & 0 & \dots & 0 \\ 0 & 2X & \dots & 0 \\ \vdots & \vdots & \ddots & 0 \\ 0 & 0 & 0 & kX \end{array} \right)_{2k\times2k}$ $G(X)=\left(\begin{array}{cc} F(X) & 0_{2k\times 1} \\ 0_{1\times 2k} & 0_{1\times 1} \end{array} \right)_{2k+1\times 2k+1}$
If $Z$ is any linear combination of $X$ and $Y$ then $F(Z)$ $(G(Z))$ is the respective linear combination of $F(X)$ and $F(Y)$ $(G(X)$ and $G(Y))$.
If $a,-a$ are the eigenvalues of $Z$ then $a,-a,2a,-2a,\dots,ka,-ka$ are the eigenvalues of $F(Z)$ $(a,-a,2a,-2a,\dots,ka,-ka,0$ are the eigenvalues of $G(Z))$.
-
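A small numerical sanity check of this construction (my own sketch; the concrete $X$, $Y$ and the Kronecker-product form of $F$ are illustrative choices): since $F(xX+yY)=xF(X)+yF(Y)$, every nonzero combination of $F(X)$ and $F(Y)$ should have $2k$ distinct eigenvalues.

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, -1.0]])   # symmetric, trace zero
Y = np.array([[0.0, 1.0], [1.0, 0.0]])    # symmetric, trace zero, independent of X

k = 3
def F(M):
    # block diag(M, 2M, ..., kM), built as a Kronecker product
    return np.kron(np.diag(np.arange(1.0, k + 1)), M)

rng = np.random.default_rng(0)
for _ in range(200):
    x, y = rng.normal(size=2)
    lam = np.linalg.eigvalsh(x * F(X) + y * F(Y))   # +/- r, +/- 2r, ..., +/- kr
    assert np.min(np.diff(lam)) > 1e-9              # all 2k eigenvalues distinct
```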
Hints.
1. Let $\lambda$ be an eigenvalue of $A$. As $A$ is diagonalisable, can you relate the geometric multiplicity of $\lambda$ to the rank of $\lambda I-A$? Now, let $B$ be the submatrix obtained by deleting the first row and last column of $\lambda I-A$. What is the rank of $B$? Then, what is the rank of $\lambda I-A$?
2. Split a matrix in the form of $A$ in part 1 into two appropriate symmetric matrices!
-
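To illustrate hint 1 numerically (my own sketch, not part of the original hints): deleting the first row and last column of $\lambda I-A$ leaves a triangular matrix whose diagonal consists of the nonzero off-diagonal entries of $A$, so its rank is $n-1$ and every eigenvalue is simple.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
main = rng.normal(size=n)
off = rng.uniform(0.5, 2.0, size=n - 1)        # strictly nonzero off-diagonal entries
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam = np.linalg.eigvalsh(A)
print(np.min(np.diff(lam)) > 0)                # True: all eigenvalues are simple

B = (lam[0] * np.eye(n) - A)[1:, :-1]          # drop first row and last column
print(np.linalg.matrix_rank(B) == n - 1)       # True: rank(lambda*I - A) >= n - 1
```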
| 961
| 2,603
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.671875
| 4
|
CC-MAIN-2016-18
|
latest
|
en
| 0.647146
|
# Symmetric matrix with $a_{ij} = 0$ for all $|i - j| > 1$ has all eigenvalues of multiplicity $1$
1 . Let $A = (a_{ij})$ be a real $n \times n$ matrix such that $a_{ij} = a_{ji}$ for all $1 \leq i,j \leq n$ and $a_{ij} = 0$ for $|i-j|>1$. Moreover $a_{ij}$ is non-zero for all $i$,$j$ satisfying $|i-j| = 1$. Show that all the eigenvalues of $A$ are of multiplicity $1$. 2 . Give examples of 2 real $n \times n$ matrices $X = (x_{ij})$, $Y = (y_{ij})$ where $x_{ij} = x_{ji}$ and $y_{ij} = y_{ji}$ for all $1 \leq i,j \leq n$ so that $xX +yY$ has $n$ non-repeated eigenvalues for all real numbers $x$, $y$ where $x$, $y$ are not zero simultaneously. Thank you for any help. -
for part(2), let X = diag(1,2,0,3) and Y = diag(4,0,5,6). – Inquest Oct 17 '12 at 20:29
@Inquest What if x=y=1? – Ester Oct 17 '12 at 20:33
In part 1. you have the Jacobi matrices. This property is well-known. – PAD Oct 17 '12 at 20:41
@Timothy. Argh. You are right. Ignore my comment. – Inquest Oct 17 '12 at 20:45
Can anyone please help me with the second one ? – Ester Oct 17 '12 at 21:20
2) Let $X,Y$ be linearly independent real symmetric matrices of order 2 and trace zero. Let $Z$ be any linear combination of $X$ and $Y$. Notice that $Z$ has the same properties. Therefore its eigenvalues $(Z$ is diagonalizable since it is real symmetric$)$ have opposite signs $($their sum must be zero$)$, unless $Z$ is the zero matrix. But that occurs only if $Z$ is the trivial combination of $X$ and $Y$. Now for matrices of order $2k$ $(2k+1)$, instead of $X$ and $Y$, use $F(X)$ and $F(Y)$ $(G(X)$ and $G(Y))$.
|
$F(X)=\left(\begin{array}{cccc} X & 0 & \dots & 0 \\ 0 & 2X & \dots & 0 \\ \vdots & \vdots & \ddots & 0 \\ 0 & 0 & 0 & kX \end{array} \right)_{2k\times2k}$ $G(X)=\left(\begin{array}{cc} F(X) & 0_{2k\times 1} \\ 0_{1\times 2k} & 0_{1\times 1} \end{array} \right)_{2k+1\times 2k+1}$
If $Z$ is any linear combination of $X$ and $Y$ then $F(Z)$ $(G(Z))$ is the respective linear combination of $F(X)$ and $F(Y)$ $(G(X)$ and $G(Y))$.
|
http://math.stackexchange.com/questions/560605/which-step-is-wrong-in-this-proof
| 1,469,814,654,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2016-30/segments/1469257831770.41/warc/CC-MAIN-20160723071031-00032-ip-10-185-27-174.ec2.internal.warc.gz
| 157,715,598
| 19,066
|
# Which step is wrong in this proof
Proof: Consider the quadratic equation $x^2+x+1=0$. Then, we can see that $x^2=−x−1$. Assuming that $x$ is not zero (which it clearly isn't, from the equation) we can divide by $x$ to give $$x=−1−\frac{1}{x}$$ Substitute this back into the $x$ term in the middle of the original equation, so $$x^2+(−1−\frac{1}{x})+1=0$$ This reduces to $$x^2=\frac{1}{x}$$ So, $x^3=1$, so $x=1$ is the solution. Substituting back into the equation for $x$ gives $1^2+1+1=0$
Therefore, $3=0$.
What happened?
-
Essentially you eliminated the $x=1$ solution from $x^3 = 1$ and then wondered where it went... – Benjamin Dickman Nov 10 '13 at 1:35
Up to $x^3=1$ everything is fine. This allows you to conclude that $x\in \{z\in \Bbb C\colon z^3=1\}$. Since $\{z\in \Bbb C\colon z^3=1\}=\left\{\dfrac{-1 + i\sqrt 3}{2}, \dfrac{-1 - i\sqrt 3}{2},1\right\}$, then $x$ is one of the elements of this set.
Your reasoning consists of logical consequences, not logical equivalences. That's why you can tell that $x\in \{z\in \Bbb C\colon z^3=1\}$, but you can't say which one it is.
See this for a similar issue.
An even simpler version of your mistake is this: suppose $x^2=1$, then $x=1$.
You can convince yourself that this is wrong and that you did the same thing in your question.
-
in your answer there are three real roots. – daulomb Nov 10 '13 at 2:54
@user40615 Thanks, tbao10 already fixed it. – Git Gud Nov 10 '13 at 9:02
@tbao10 Thanks for fixing the typo. – Git Gud Nov 10 '13 at 9:02
What you have proved is that there is no real number $x$ such that $x^2+x+1=0$.
On the other hand, the two complex solutions of $x^2+x+1=0$ do indeed satisfy $x^3=1$.
-
if $x^2+x+1=0$ then $(x-1)(x^2+x+1)=0$ thus $x^3-1=0$, thus: $x^3=1$.
if ... then .... is not a logical equivalence but only a logical implication.
-
$x^2+x+1=0$ is a quadratic. This means it has two answers.
$x^3=1$ is a cubic. This means it has three answers. Therefore if you solve for $x$, then one of the answers you get won't fit the original equation.
This "extraneous" solution just happens to be $x=1$.
-
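A short numerical check (mine, not from the answers) makes the point concrete: the two roots of $x^2+x+1=0$ are complex, both satisfy $x^3=1$, and neither of them equals $1$.

```python
import numpy as np

r = np.roots([1, 1, 1])               # roots of x^2 + x + 1: (-1 +/- i*sqrt(3)) / 2
print(np.allclose(r**3, 1))           # True: both are cube roots of unity
print(np.allclose(r**2 + r + 1, 0))   # True: they satisfy the original quadratic
print(np.any(np.isclose(r, 1)))       # False: x = 1 is the extraneous root of x^3 = 1
```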
| 732
| 2,127
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.53125
| 5
|
CC-MAIN-2016-30
|
latest
|
en
| 0.886936
|
# Which step is wrong in this proof
Proof: Consider the quadratic equation $x^2+x+1=0$. Then, we can see that $x^2=−x−1$. Assuming that $x$ is not zero (which it clearly isn't, from the equation) we can divide by $x$ to give $$x=−1−\frac{1}{x}$$ Substitute this back into the $x$ term in the middle of the original equation, so $$x^2+(−1−\frac{1}{x})+1=0$$ This reduces to $$x^2=\frac{1}{x}$$ So, $x^3=1$, so $x=1$ is the solution. Substituting back into the equation for $x$ gives $1^2+1+1=0$
Therefore, $3=0$. What happened? -
Essentially you eliminated the $x=1$ solution from $x^3 = 1$ and then wondered where it went... – Benjamin Dickman Nov 10 '13 at 1:35
Up to $x^3=1$ everything is fine. This allows you to conclude that $x\in \{z\in \Bbb C\colon z^3=1\}$. Since $\{z\in \Bbb C\colon z^3=1\}=\left\{\dfrac{-1 + i\sqrt 3}{2}, \dfrac{-1 - i\sqrt 3}{2},1\right\}$, then $x$ is one of the elements of this set. You made a reasoning consisting of logical consequences, not one of logical equivalences. That's why you can tell that $x\in \{z\in \Bbb C\colon z^3=1\}$, but you can't say which one is it. See this for a similar issue. An even more simpler version of your mistake is this: suppose $x^2=1$, then $x=1$. You can convince yourself that this is wrong and that you did the same thing in your question. -
in your answer there are three real roots. – daulomb Nov 10 '13 at 2:54
@user40615 Thanks, tbao10 already fixed it. – Git Gud Nov 10 '13 at 9:02
@tbao10 Thanks for fixing the typo. – Git Gud Nov 10 '13 at 9:02
What you have proved is that there is no real number $x$ such that $x^2+x+1=0$. On the other hand, the two complex solutions of $x^2+x+1=0$ do indeed satisfy $x^3=1$. -
if $x^2+x+1=0$ then $(x-1)(x^2+x+1)=0$ thus $x^3-1=0$, thus: $x^3=1$. if ... then .... is not a logical equivalence but only a logical implication. -
$x^2+x+1=0$ is a quadratic. This means it has two answers. $x^3=1$ is a cubic. This means it has three answers. Therefore if you solve for $x$, then one of the answers you get won't fit the original equation.
|
This "extraneous" solution just happens to be $x=1$.
|
http://physics.stackexchange.com/questions/62769/volume-of-gas-at-which-relative-fluctuation-of-gas-density-occurs
| 1,448,729,034,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2015-48/segments/1448398453576.62/warc/CC-MAIN-20151124205413-00155-ip-10-71-132-137.ec2.internal.warc.gz
| 183,577,896
| 17,531
|
# Volume of gas at which relative fluctuation of gas density occurs
I have the following question:
In what volume of gas does a 10 % relative fluctuation of gas density occur at a pressure of $10^5\text{ Pa}$ and a temperature of $293.15\text{ K}$?
I don't understand the topic but I assume this is about ideal gas. Can you please explain this to someone who has just high school physics knowledge?
-
You can compute relative fluctuation of gas volume (which is the same as fluctuation of gas density here) by computing probabilities using entropy (or equivalently Gibbs energy) difference. The following page has explained all the steps.
The final formula would be: $\delta = \langle {(V - V_0)^2 \over V^2} \rangle = {1 \over N}$
In which $N$ is number of atoms which we can compute from ideal gas state equation: $N = {PV \over kT}$
So $\delta = \langle {(V - V_0)^2 \over V^2} \rangle = {k T \over P V}$
$\delta = \langle {(V - V_0)^2 \over V^2} \rangle = 0.01$
$V = {k T \over P \delta} = 4\times 10^{-24} ~\text m^3 = 4000 ~\text{nm}^3$
This volume is small; you need very few atoms to have such huge fluctuations.
-
Can you please explain in detail where did the original values go? (Or how did we get to 0.01 and 4*10^-24.) The page you are linking to is above my level of understanding :- / – Andrew123321 Apr 30 '13 at 14:04
${(V - V_0) \over V} = 0.1$ So approximately $\langle {(V - V_0)^2 \over V^2} \rangle = 0.01$ – Azad Apr 30 '13 at 14:39
@Andrew123321 May I ask what grade are you in and where you see the question? – Azad Apr 30 '13 at 14:42
I see. Can you please also expand on 4*10^-24? I am an undergraduate student of computer science and this is an excercise from physics class I've taken. – Andrew123321 Apr 30 '13 at 14:52
Well, $k$ is the Boltzmann constant $k = 1.38 \times 10^{-23} J/K$ , $T = 293.15 K$ , $P=10^5 Pa$ , $\delta = 0.01$. Just put them in the last formula. – Azad Apr 30 '13 at 16:22
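Plugging in the constants from the last comment (a quick arithmetic check, not part of the original thread):

```python
k = 1.38e-23      # Boltzmann constant, J/K
T = 293.15        # K
P = 1e5           # Pa
delta = 0.1**2    # <(V - V0)^2 / V^2> for a 10% relative fluctuation

V = k * T / (P * delta)
print(V)            # ~4.0e-24 m^3
print(V / 1e-27)    # ~4.0e3, i.e. about 4000 nm^3
```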
| 606
| 1,934
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.828125
| 4
|
CC-MAIN-2015-48
|
longest
|
en
| 0.910617
|
# Volume of gas at which relative fluctuation of gas density occurs
I have the following question:
In what volume of gas occurs 10 % relative fluctuation of gas density under pressure of $10^5\text{ Pa}$ and temperature of $293.15\text{ K}$? I don't understand the topic but I assume this is about ideal gas. Can you please explain this to someone who has just high school physics knowledge? -
You can compute relative fluctuation of gas volume (which is the same as fluctuation of gas density here) by computing probabilities using entropy (or equivalently Gibbs energy) difference. The following page has explained all the steps. The final formula would be: $\delta = \langle {(V - V_0)^2 \over V^2} \rangle = {1 \over N}$
In which $N$ is number of atoms which we can compute from ideal gas state equation: $N = {PV \over kT}$
So $\delta = \langle {(V - V_0)^2 \over V^2} \rangle = {k T \over P V}$
$\delta = \langle {(V - V_0)^2 \over V^2} \rangle = 0.01$
$V = {k T \over P \delta} = 4\times 10^{-24} ~\text m^3 = 4000 ~\text{nm}^3$
This volume is small; you need very few atoms to have such huge fluctuations. -
Can you please explain in detail where did the original values go? (Or how did we get to 0.01 and 4*10^-24.) The page you are linking to is above my level of understanding :- / – Andrew123321 Apr 30 '13 at 14:04
${(V - V_0) \over V} = 0.1$ So approximately $\langle {(V - V_0)^2 \over V^2} \rangle = 0.01$ – Azad Apr 30 '13 at 14:39
@Andrew123321 May I ask what grade are you in and where you see the question? – Azad Apr 30 '13 at 14:42
I see. Can you please also expand on 4*10^-24? I am an undergraduate student of computer science and this is an excercise from physics class I've taken.
|
– Andrew123321 Apr 30 '13 at 14:52
Well, $k$ is the Boltzmann constant $k = 1.38 \times 10^{-23} J/K$ , $T = 293.15 K$ , $P=10^5 Pa$ , $\delta = 0.01$.
|
https://math.stackexchange.com/questions/3014622/use-orthogonality-to-proof-parsevals-identity-for-the-general-fourier-series-wr
| 1,558,536,422,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2019-22/segments/1558232256858.44/warc/CC-MAIN-20190522143218-20190522165218-00083.warc.gz
| 554,651,189
| 33,344
|
# Use orthogonality to proof Parseval's identity for the general Fourier series written as the power spectrum
I need to show that $$\int_{-\pi}^{\pi}\left|\frac{a_0}{2}+\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\right|^2dx=2\pi\left(\frac{a_0^2}{4}+\frac12\sum_{n=1}^{\infty}\alpha_n^2\right)\tag{1}$$
Just for reference the trigonometric Fourier series is $$f(x)= \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos(nx)+b_n\sin(nx)\right)$$ and the connection between the trigonometric Fourier series and the power spectrum is given by $$a_n=\alpha_n\cos\theta_n$$ $$b_n=\alpha_n\sin\theta_n$$ $$\alpha_n^2=a_n^2+b_n^2$$ $$\tan\theta_n=\frac{b_n}{a_n}$$
So I start by expanding the LHS of $$(\mathrm{1})$$
$$\int_{-\pi}^{\pi}\left\{\frac{a_0^2}{4}+a_0\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)+\left[\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\right]^2\right\}dx$$ $$= \int_{-\pi}^{\pi}\frac{a_0^2}{4}dx+a_0\int_{-\pi}^{\pi}\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)dx+\int_{-\pi}^{\pi}\left[\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\right]^2dx$$ $$= 2\pi\frac{a_0^2}{4}+a_0\int_{-\pi}^{\pi}\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)dx$$ $$+\int_{-\pi}^{\pi}\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\sum_{m=1}^{\infty}\alpha_m\cos(mx-\theta_m)dx\tag{2}$$
I don't know how to proceed any further with this, but I do know that for integer $$n\ne m$$, $$\langle\cos(nx)|\cos(mx)\rangle=0$$. However, I am struggling to apply the same logic to $$(\mathrm{2})$$ as the cosines have different phase offsets, and I am also confused about how to deal with the two sums in the second integral.
Does anyone have any advice on how I can complete this proof?
• You have to argue why the infinite sum can be exchanged with the integral, and then, as $\cos$ is periodic, we get the same integrals for each $n$ if we drop $\theta_n$.. – Berci Nov 26 '18 at 17:40
• @Berci Thanks for your reply, I'm not sure why the infinite sum can be exchanged with the integral. In fact, I don't even understand what you mean by 'exchange'. Could you please elaborate on this in an answer? – BLAZE Nov 26 '18 at 19:37
$$\cos(nx-\theta_n)=\cos(nx)\cos(\theta_n)+\sin(nx)\sin(\theta_n)$$
Therefore $$\int_{-\pi}^\pi \cos(nx-\theta_n)\cos(mx-\theta_m)dx={\cos(\theta_n)\cos(\theta_m)\int_{-\pi}^\pi \left(\cos(nx)\cos(mx)\right)dx \quad\text{etc.}}$$
For $$n\ne m$$, $$\int_{-\pi}^\pi \cos(nx)\cos(mx)dx=0$$ and in general $$\int_{-\pi}^\pi \cos(nx)\sin(mx)dx=0$$ Meanwhile $$\int_{-\pi}^\pi \cos^2(nx)dx=\pi$$ This will allow you to complete $$(2)$$.
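As a sanity check on these orthogonality relations (my own sketch; the random coefficients and the use of scipy.integrate.quad are illustrative), identity $$(1)$$ holds exactly for any truncated series:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
N = 4
a0 = rng.normal()
alpha = rng.uniform(0.5, 2.0, N)
theta = rng.uniform(0.0, 2.0 * np.pi, N)

def f(x):
    # truncated series a0/2 + sum_n alpha_n cos(n x - theta_n)
    return a0 / 2 + sum(alpha[n] * np.cos((n + 1) * x - theta[n]) for n in range(N))

lhs, _ = quad(lambda x: f(x) ** 2, -np.pi, np.pi, limit=200)
rhs = 2 * np.pi * (a0 ** 2 / 4 + 0.5 * np.sum(alpha ** 2))
print(np.isclose(lhs, rhs))    # True
```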
| 947
| 2,526
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.515625
| 4
|
CC-MAIN-2019-22
|
latest
|
en
| 0.687483
|
# Use orthogonality to proof Parseval's identity for the general Fourier series written as the power spectrum
I need to show that $$\int_{-\pi}^{\pi}\left|\frac{a_0}{2}+\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\right|^2dx=2\pi\left(\frac{a_0^2}{4}+\frac12\sum_{n=1}^{\infty}\alpha_n^2\right)\tag{1}$$
Just for reference the trigonometric Fourier series is $$f(x)= \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos(nx)+b_n\sin(nx)\right)$$ and the connection between the trigonometric Fourier series and the power spectrum is given by $$a_n=\alpha_n\cos\theta_n$$ $$b_n=\alpha_n\sin\theta_n$$ $$\alpha_n^2=a_n^2+b_n^2$$ $$\tan\theta_n=\frac{b_n}{a_n}$$
So I start by expanding the LHS of $$(\mathrm{1})$$
$$\int_{-\pi}^{\pi}\left\{\frac{a_0^2}{4}+a_0\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)+\left[\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\right]^2\right\}dx$$ $$= \int_{-\pi}^{\pi}\frac{a_0^2}{4}dx+a_0\int_{-\pi}^{\pi}\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)dx+\int_{-\pi}^{\pi}\left[\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\right]^2dx$$ $$= 2\pi\frac{a_0^2}{4}+a_0\int_{-\pi}^{\pi}\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)dx$$ $$+\int_{-\pi}^{\pi}\sum_{n=1}^{\infty}\alpha_n\cos(nx-\theta_n)\sum_{m=1}^{\infty}\alpha_m\cos(mx-\theta_m)dx\tag{2}$$
I don't know how to proceed any further with this but do I know that for integer $$n\ne m$$ $$\langle\cos(nx)|\cos(mx)\rangle=0$$ but I am struggling to apply the same logic to $$(\mathrm{2})$$ as the cosines have different phase offsets, I am also confused about how to deal with the 2 sums in the second integral. Does anyone have any advice on how I can complete this proof? • You have to argue why the infinite sum can be exchanged with the integral, and then, as $\cos$ is periodic, we get the same integrals for each $n$ if we drop $\theta_n$.. – Berci Nov 26 '18 at 17:40
• @Berci Thanks for your reply, I'm not sure why the infinite sum can be exchanged with the integral. In fact, I don't even understand what you mean by 'exchange'. Could you please elaborate on this in an answer? – BLAZE Nov 26 '18 at 19:37
$$\cos(nx-\theta_n)=\cos(nx)\cos(\theta_n)+\sin(nx)\sin(\theta_n)$$
Therefore $$\int_{-\pi}^\pi \cos(nx-\theta_n)\cos(mx-\theta_m)dx={\cos(\theta_n)\cos(\theta_m)\int_{-\pi}^\pi \left(\cos(nx)\cos(mx)\right)dx \quad\text{etc.
|
}}$$
For $$n\ne m$$, $$\int_{-\pi}^\pi \cos(nx)\cos(mx)dx=0$$ and in general $$\int_{-\pi}^\pi \cos(nx)\sin(mx)dx=0$$ Meanwhile $$\int_{-\pi}^\pi \cos^2(nx)dx=\pi$$ This will allow you to complete $$(2)$$.
|
https://math.stackexchange.com/questions/1131073/how-many-subsets
| 1,582,829,375,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00090.warc.gz
| 454,502,918
| 32,504
|
# How many subsets?
How many subsets of size 4 of the set S={1,2,....20} contain at least 1 of the elements 1,2,3,4,5?
$${5 \choose 4}{15 \choose 0}+{5 \choose 3}{15 \choose 1}+{5 \choose 2}{15 \choose 2}+{5 \choose 1}{15 \choose 3} + \binom50 \binom{15}4$$
5 "special elements"
15 "regular elements"
• Look at the last two terms again, I think you meant something different. – AlexR Feb 2 '15 at 23:29
• ah yea typos! besides that its ok? – Math Major Feb 2 '15 at 23:30
• ... Contain at least one element. You'll want to drop the last term. The rest is fine. – AlexR Feb 2 '15 at 23:31
• oh righ thanks! – Math Major Feb 2 '15 at 23:33
• You'll want to accept Meelo's answer then since there's nothing more to add. Note that {n \choose k} is superseeded by \binom{n}{k} for syntactical reasons ;) – AlexR Feb 2 '15 at 23:34
Your answer is almost correct - the last term is confusing and also wrong. Consider that any subset of size $4$ containing at least one of the $5$ special elements can be partitioned into a subset $S$ of special elements (of size $1$ through $4$) and a subset $R$ of regular elements (of size $0$ through $3$). Thus, the correct answer would simply be: $${5\choose 4}{15\choose 0}+{5\choose 3}{15\choose 1}+{5\choose 2}{15\choose 2}+{5\choose 1}{15\choose 3}$$ where we take the sum over the possible sizes of $S$ and $R$. This is basically what you have, except without the confusing last term, which would seem to represent the case where $S$ has size $0$ - a case we are not interested in, since it corresponds to having no special elements.
The complementary subsets – those of size $4$ which contain none of $1,2,3,4,5$ are $\displaystyle \binom{15}{4}$, and there are $\displaystyle \binom{20}{4}$ subsets of size $4$ in all. Hence the number of subsets of size $4$ which contain at least one of these numbers is equal to
$$\binom{20}{4}-\binom{15}{4}=\frac{20!}{4!\,16!}-\frac{15!}{4!\,11!}=3480.$$
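Both counts agree, and a brute-force enumeration confirms them (my own check, not part of the answers; names are illustrative):

```python
from math import comb
from itertools import combinations

direct = sum(comb(5, k) * comb(15, 4 - k) for k in range(1, 5))
complement = comb(20, 4) - comb(15, 4)
brute = sum(1 for s in combinations(range(1, 21), 4) if any(x <= 5 for x in s))
print(direct, complement, brute)    # 3480 3480 3480
```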
| 639
| 1,938
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
| 4.28125
| 4
|
CC-MAIN-2020-10
|
latest
|
en
| 0.881326
|
# How many subsets? How many subsets of size 4 of the set S={1,2,....20} contain at least 1 of the elements 1,2,3,4,5? $${5 \choose 4}{15 \choose 0}+{5 \choose 3}{15 \choose 1}+{5 \choose 2}{15 \choose 2}+{5 \choose 1}{15 \choose 3} + \binom50 \binom{15}4$$
5 "special elements"
15 "regular elements"
• Look at the last two terms again, I think you meant something different. – AlexR Feb 2 '15 at 23:29
• ah yea typos! besides that its ok? – Math Major Feb 2 '15 at 23:30
• ... Contain at least one element. You'll want to drop the last term. The rest is fine. – AlexR Feb 2 '15 at 23:31
• oh righ thanks! – Math Major Feb 2 '15 at 23:33
• You'll want to accept Meelo's answer then since there's nothing more to add. Note that {n \choose k} is superseeded by \binom{n}{k} for syntactical reasons ;) – AlexR Feb 2 '15 at 23:34
Your answer is almost correct - the last term is confusing and also wrong. Consider that any subset of size $4$ containing at least one of the $5$ special elements can be partitioned into a subset $S$ of special elements (of size $1$ through $4$) and a subset $R$ of regular elements (of size $0$ through $3$). Thus, the correct answer would simply be: $${5\choose 4}{15\choose 0}+{5\choose 3}{15\choose 1}+{5\choose 2}{15\choose 2}+{5\choose 1}{15\choose 3}$$ where we take the sum over the possible sizes of $S$ and $R$. This is basically what you have, except without the confusing last term, which would seem to represent the case if $S$ had size $0$ - which is not a case we are interested in - that case represents have no special elements. The complementary subsets – those of size $4$ which contain none of $1,2,3,4,5$ are $\displaystyle \binom{15}{4}$, and there are $\displaystyle \binom{20}{4}$ subsets of size $4$ in all. Hence the number of subsets of size $4$ which contain at least one of these numbers is equal to
$$\binom{20}{4}-\binom{15}{4}=\frac{20!}{4!\,16!}-\frac{15!}{4!\,11!
|
}=3480.$$
|
https://stats.stackexchange.com/questions/467363/what-is-the-distribution-of-max-min-for-a-gaussian-distribution
| 1,725,754,297,000,000,000
|
text/html
|
crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00841.warc.gz
| 521,644,740
| 40,988
|
# What is the distribution of max-min for a Gaussian distribution
For a process N(t), where at any instant t=T0, the distribution of N(T0) is Gaussian with mu=0:
What is the distribution of max(N(t))-min(N(t))?
From my simulation, it has some non-zero positive mean value and a waveform that looks like Gaussian but has a longer tail on the right side:
• @JarleTufto that looks like an answer in itself. Commented May 19, 2020 at 15:25
• Does this answer your question? Independence of Sample mean and Sample range of Normal Distribution Commented May 19, 2020 at 18:23
• This seems quite similar to the studentized-range distribution (see en.wikipedia.org/wiki/…), but I'm not quite confident enough to put this in an answer ... Commented May 20, 2020 at 3:26
Working with the standard normal case for simplicity, the joint density of the minimum and maximum is $$f_{X_{(1)},X_{(n)}}(x_1,x_2)=\frac{n!}{(n-2)!}\phi(x_1)\phi(x_2)[\Phi(x_2)-\Phi(x_1)]^{n-2},$$ for $$x_2>x_1$$. The joint density of the linear transformation \begin{align} Y_1&=X_{(n)}-X_{(1)}, \\ Y_2&=X_{(n)} \end{align} becomes \begin{align} f_{Y_1,Y_2}(y_1,y_2) &=f_{X_{(1)},X_{(n)}}(y_2-y_1,y_2) \\&=\frac{n!}{(n-2)!}\phi(y_2-y_1)\phi(y_2)[\Phi(y_2)-\Phi(y_2-y_1)]^{n-2} \end{align} for $$y_1>0$$. Hence, the marginal density of $$Y_1$$ is \begin{align} f_{Y_1}(y_1) &=\int_{-\infty}^\infty f_{Y_1,Y_2}(y_1,y_2)dy_2 \\&=\frac{n!}{(n-2)!}\int_{-\infty}^\infty\phi(y_2-y_1)\phi(y_2)[\Phi(y_2)-\Phi(y_2-y_1)]^{n-2}dy_2. \end{align} At least for $$n=2$$ and $$n=3$$ but perhaps also for larger $$n$$, this integral has an analytic solution. Resorting to numerical integrations using the R code
dminmax <- function(y1, n) {
  # numerically integrate the joint density over y2 to get the marginal density of Y1
  res <- integrate(function(y2) dnorm(y2 - y1)*dnorm(y2)*(pnorm(y2) - pnorm(y2 - y1))^(n - 2),
                   -Inf, Inf)
  n*(n - 1)*res$value
}
dminmax <- Vectorize(dminmax)
curve(dminmax(x, 5), from = 0, to = 6)
produces the plot
• I'm a bit confused in places notationally if you are trying to provide a general solution or if you are treating an $n=2$ special case. Commented Mar 28 at 19:35
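A quick Monte Carlo cross-check in Python (mine, not part of the answer) reproduces the positive mean and right skew of the range that the question describes, here for $n=5$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000
x = rng.standard_normal((reps, n))
r = x.max(axis=1) - x.min(axis=1)                  # sample range = max - min

print(r.mean())                                     # ~2.3, a positive mean
skew = np.mean((r - r.mean()) ** 3) / r.std() ** 3
print(skew > 0)                                     # True: longer tail on the right
```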
| 685
| 1,955
|
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0}
| 3.546875
| 4
|
CC-MAIN-2024-38
|
latest
|
en
| 0.81299
|
# What is the distribution of max-min for a Gaussian distribution
For a process N(t), where at any instant t=T0, the distribution of N(T0) is Gaussian with mu=0:
What is the distribution of max(N(t))-min(N(t))? From my simulation, it has some non-zero positive mean value and a waveform that looks like Gaussian but has a longer tail on the right side:
• @JarleTufto that looks like an answer in itself. Commented May 19, 2020 at 15:25
• Does this answer your question? Independence of Sample mean and Sample range of Normal Distribution Commented May 19, 2020 at 18:23
• This seems quite similar to the studentized-range distribution (see en.wikipedia.org/wiki/…), but I'm not quite confident enough to put this in an answer ... Commented May 20, 2020 at 3:26
Working with the standard normal case for simplicity, the joint density of the minimum and maximum is $$f_{X_{(1)},X_{(n)}}(x_1,x_2)=\frac{n!}{(n-2)! }\phi(x_1)\phi(x_2)[\Phi(x_2)-\Phi(x_1)]^{n-2},$$ for $$x_2>x_1$$. The joint density of the linear transformation \begin{align} Y_1&=X_{(n)}-X_{(1)}, \\ Y_2&=X_{(n)} \end{align} becomes \begin{align} f_{Y_1,Y_2}(y_1,y_2) &=f_{X_{(1)},X_{(n)}}(y_2-y_1,y_2) \\&=\frac{n!}{(n-2)! }\phi(y_2-y_1)\phi(y_2)[\Phi(y_2)-\Phi(y_2-y_1)]^{n-2} \end{align} for $$y_1>0$$. Hence, the marginal density of $$Y_1$$ is \begin{align} f_{Y_1}(y_1) &=\int_{-\infty}^\infty f_{Y_1,Y_2}(y_1,y_2)dy_2 \\&=\frac{n!}{(n-2)!}\int_{-\infty}^\infty\phi(y_2-y_1)\phi(y_2)[\Phi(y_2)-\Phi(y_2-y_1)]^{n-2}dy_2. \end{align} At least for $$n=2$$ and $$n=3$$ but perhaps also for larger $$n$$, this integral has an analytic solution.
|
Resorting to numerical integrations using the R code
dminmax <- function(y1, n) {
n*(n-1)*res$value } dminmax <- Vectorize(dminmax) curve(dminmax(x,5), add) produces the plot • I'm a bit confused in places notationally if you are trying to provide a general solution or if you are treating an $n=2$ special case.
|