
/sci/ - Science & Math



File: 3 KB, 242x148, Untitled.png
No.9778126

Can someone explain where separation of variables comes from? Why does the first statement result in the second?

>> No.9778130

>>9778126
You make a guess that it's possible, then solve the equation from there.

>> No.9778150

>>9778130
>You make a guess that its possible then solve the equation from there
This can't be real.

>> No.9778171

It's a gradient. You take the partial derivative with respect to every variable while treating the others as constants.

>> No.9778214
File: 413 KB, 510x652, 1478389217612.gif

>>9778150
It is, at least from the engineering perspective it looks like a guess.
From a geometric (i.e. non-retarded) point of view, if you're in separable coordinates then the Laplacian operator decomposes into tensor factors, hence solutions are naturally also tensor factors. Uniqueness of solutions then means that you can stop looking.
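
For concreteness, here is a minimal sketch of that decomposition in the simplest separable coordinates (Cartesian; my own example, not from the post):
\[
\Delta = \partial_x^2 + \partial_y^2, \qquad u(x,y) = X(x)Y(y)
\;\Rightarrow\;
\Delta u = X''(x)\,Y(y) + X(x)\,Y''(y),
\]
so the eigenvalue problem \(\Delta u = -\lambda u\) splits into two one-variable eigenvalue problems, \(X'' = -\mu X\) and \(Y'' = -\nu Y\) with \(\mu + \nu = \lambda\); the two-variable eigenfunctions are products (tensor factors) of one-variable eigenfunctions.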

>> No.9778217

>>9778150
It's legitimately a thing. One of my textbooks called making a guess and checking it a "heuristic approach".

>> No.9778255

>>9778171
Why can you even treat it as a constant, though?

>> No.9778544

>>9778171
>gradient

>> No.9778562

>>9778214
Where could I read more about this? I've always been curious about the conditions necessary for separation of variables to work but most of my colleagues just don't really care.

>> No.9778578

>>9778562
https://en.wikipedia.org/wiki/Orthogonal_coordinates

>> No.9778593

>>9778578
Thanks!

>> No.9778595
File: 298 KB, 640x480, 1527120497530.png

>>9778126
The motivation is:
1) hey this PDE is fucking hard and we don't know how to solve it in general.
2) let's get some traction by making simplifying assumptions and see what happens
3) (insert separation of variables here)
4) hey, we get a family of solutions that actually solve the equation
5) hey, the equation is linear, so we can add solutions together to get new solutions
6) hey, the solutions are orthogonal w.r.t. an inner product (like vectors), and span the space of general solutions (i.e. a vector space basis)
7) hey, the inner product can be used to project an arbitrary function onto the basis functions (like projecting a vector onto a basis to measure its components)
8) hey, we have a general solution, and uniqueness arguments say we can stop here.
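
A compact worked version of steps 3-7, using Laplace's equation on a half-strip as an assumed example (not necessarily the OP's equation):
\[
u_{xx} + u_{yy} = 0,\qquad u(x,y) = X(x)Y(y)
\;\Rightarrow\;
\frac{X''(x)}{X(x)} = -\frac{Y''(y)}{Y(y)} = -\lambda.
\]
With \(u(0,y) = u(a,y) = 0\) and boundedness as \(y \to \infty\), this forces \(X_n(x) = \sin(n\pi x/a)\), \(\lambda_n = (n\pi/a)^2\), \(Y_n(y) = e^{-n\pi y/a}\). Linearity and orthogonality then give
\[
u(x,y) = \sum_{n \ge 1} c_n \sin\!\left(\frac{n\pi x}{a}\right) e^{-n\pi y/a},
\qquad
c_n = \frac{2}{a}\int_0^a u(x,0)\,\sin\!\left(\frac{n\pi x}{a}\right)dx,
\]
i.e. the remaining boundary data are projected onto the basis functions exactly as in step 7.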

>> No.9778600

>>9778214
What's a tensor? We never learned any of this stuff.

>> No.9778609

>>9778600
The easiest way to put it is that tensors are a generalization of vectors and matrices, but there's a bit more to it than that.

>> No.9778615

>>9778600
Tensors generalize scalars (single numbers), vectors (singly-indexed lists of numbers), matrices (doubly-indexed lists of numbers), etc.; the linear operations that relate these kinds of objects; and how they transform (taking covariant and contravariant transformations into account).

From an engineering point of view, typically when somebody says "tensor" they mean "matrix". (Older) mathematicians typically define tensors by the way they transform, then ask themselves whether they can find examples of such transformations.

>> No.9778633

>>9778615
Admittedly, these are pretty much non-answers to the uninitiated.

Personally, I take the approach that you need to first have a solid understanding of vectors and vector spaces. You need to know what bases are, why they are important, and the difference between vectors, basis vectors, and components. Then you need to study how to transform vectors and matrices linearly with (invertible) matrices, and the difference between covariant and contravariant components and vectors. Then study what a metric is. Finally, generalize the math to arbitrary numbers of co- and contravariant indices and appeal to the notion of tensors, how they combine, and how they transform.
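
A minimal sketch of the transformation rules described above, in standard index notation (my own summary, using the usual Jacobian conventions):
\[
v'^{\,i} = \frac{\partial x'^{\,i}}{\partial x^{j}}\,v^{j}
\;\;\text{(contravariant)},
\qquad
w'_{i} = \frac{\partial x^{j}}{\partial x'^{\,i}}\,w_{j}
\;\;\text{(covariant)},
\qquad
T'^{\,i}_{\;\;j} = \frac{\partial x'^{\,i}}{\partial x^{k}}\,\frac{\partial x^{l}}{\partial x'^{\,j}}\,T^{k}_{\;\;l},
\]
with one Jacobian factor per upper index and one inverse-Jacobian factor per lower index; the metric \(g_{ij}\) is the rank-(0,2) tensor used to raise and lower indices, e.g. \(v_i = g_{ij}v^{j}\).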

In short, in my own (probably flawed) words, tensors are the most general linear objects, with things like scalars, vectors, and matrices (all vectors in their own right) being examples.

I'll flesh out each detail a little bit at a time.

>> No.9778634

>>9778615
I think I need to brush up on my linear algebra now.

>> No.9778644
File: 282 KB, 438x897, 1491237591603.png

>>9778609
>>9778615
None of these are related to tensor factors of derivation modules, of which functions with differential operators are an example. This is one of those cases where different math objects are assigned the same name. Please don't confuse >>9778562 and >>9778600 with your ignorance.

In general, a monoidal category C can be equipped with a tensor structure, which can be treated as a map C x C → C that is natural with respect to the monoidal structure. The universal property is that, if an object V is isomorphic in C to a tensor product of objects W and Z, then endomorphisms of V are isomorphic to a tensor product of endomorphisms on W and Z.
Let C be the category of derivation modules over manifolds, i.e. its objects are modules of functions M on a manifold X equipped with an action by a derivation algebra A of differential operators, and its morphisms are module morphisms. Then C is a symmetric monoidal tensor category (check!). If X has separable coordinates, then M is a tensor factor of modules of functions and the a-evaluation endomorphism for a in A has the universal property.
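
To make that less abstract, here is what the tensor-factor statement reduces to in the simplest situation, a product manifold with product coordinates (my own illustration, not a claim about the general monoidal machinery):
\[
C^\infty(X_1 \times X_2) \;\cong\; C^\infty(X_1) \,\hat{\otimes}\, C^\infty(X_2),
\qquad
\Delta_{X_1 \times X_2} \;=\; \Delta_{X_1} \otimes 1 \;+\; 1 \otimes \Delta_{X_2},
\]
so eigenfunctions of the total Laplacian can be taken of the tensor-factor form \(f \otimes g\), i.e. \(f(x_1)g(x_2)\), with the eigenvalues adding; separation of variables is the search for exactly these factorized eigenfunctions.
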
>>9778633
Again, this has absolutely nothing to do with PDEs, which lie beyond the scope of linear algebra. Please stop trying to sound smarter than you actually are.

>> No.9778661

>>9778644
You seem to know about the abstract algebraic structure. I know groups, fields, and vector spaces, but that's it. It seems like you described differential geometry. Aren't the concepts related? Can you go into a little more detail?

>> No.9778665

>>9778255
Because it only works for things like parametrized functions where the variables don't overlap.

>> No.9778669

>>9778126
If you were to write the solution as a power series in x and y, every term of the series would be a function of x times a function of y.

The only real issue with the method is proving that all of the solution pairs form a basis for the solution space.
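
A one-line sketch of the power-series remark, assuming an analytic solution:
\[
u(x,y) = \sum_{m,n \ge 0} a_{mn}\,x^{m}y^{n},
\]
so each term \(a_{mn}x^{m}y^{n}\) is already of the separated form \(X(x)Y(y)\); the nontrivial step, as the post says, is showing that the separated solutions you actually obtain span the solution space.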

>> No.9778675

>>9778126
Because separation of variables is the easiest way to do things.

>> No.9778676
File: 317 KB, 442x472, Touchthekitsune_7ec6f6ececd730cd6a211dc7af41bc1b.png

>>9778661
>It seems like you described differential geometry. Aren't the concepts related? Can you go into a little more detail?
Differential geometry can be considered as the details of the category C. In fact, literally any branch of mathematics can be considered as "the details" of some category, subcategory theory. For instance, if you have existence statements for C through the universal property, Yoneda, or something else, differential geometry can tell you exactly what those objects are constructively.
On just the level of tangent/cotangent spaces, sure, these concepts are related in the sense that tensors can be defined as rank-(p,q) sections of a vector bundle. Sure, linear algebraic concepts are utilized, but there is just so much more structure to the objects you're studying that giving a purely linear algebraic answer to a question about them is grossly incomplete.

>> No.9778681

>>9778676
>subcategory theory
even category theory*

>> No.9778774

>>9778644
>>9778676
Where can I learn more about linear algebra? I only have an introductory book that ends on quadratic residues.

>> No.9778779

>>9778774
Lang or Hoffman & Kunze

>> No.9778797

>>9778126
Can we construct any solution from the separated solutions? In general the answer is no, only in some special cases: if you get a complete set of orthogonal functions like >>9778595 said, then any function of the type X(x)Y(y) is a superposition of the separated solutions.

>> No.9778803

>>9778644
I don't understand why you use the words "tensor" and "monoidal" as if they refer to different things

>> No.9778811
File: 357 KB, 675x596, 1502332275043.png

>>9778803
>I don't understand
I know. That much is obvious.

>> No.9778911

brainlets only solve this analytically... be a chad, do the finite element method

>> No.9778954

When you have linear partial differential equations, you can construct a vector space of the possible solutions. This space is usually infinite-dimensional, so you need an infinite number of solutions to construct a "basis". Separation of variables almost always lets you split your equation into two ordinary differential equations. With this and your boundary conditions, you can (at least for the typical equations) construct a set that lets you do Fourier analysis on it. The point is that it's a method that usually works rather than one derived from rigorous principles, but that's why it's important to learn heuristics and problem-solving methods: it's better to solve some cases in particular and then try to attack the problem in a different manner.
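
A compact illustration of that paragraph, taking the 1-D heat equation with Dirichlet boundary conditions as an assumed example:
\[
u_t = k\,u_{xx},\quad u(0,t) = u(L,t) = 0,\qquad
u(x,t) = X(x)T(t)
\;\Rightarrow\;
\frac{T'(t)}{k\,T(t)} = \frac{X''(x)}{X(x)} = -\lambda,
\]
which is the promised pair of ordinary differential equations. The boundary conditions select \(X_n(x) = \sin(n\pi x/L)\), \(\lambda_n = (n\pi/L)^2\), and the initial data enter through a Fourier sine series:
\[
u(x,t) = \sum_{n \ge 1} b_n \sin\!\left(\frac{n\pi x}{L}\right) e^{-k\lambda_n t},
\qquad
b_n = \frac{2}{L}\int_0^L u(x,0)\,\sin\!\left(\frac{n\pi x}{L}\right)dx.
\]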

>> No.9779009

>>9778811
>le smug deflecting animu face.jpg

>> No.9779013
File: 116 KB, 493x363, smug23.jpg

>>9779009
There's nothing to deflect, sweetie. Get back to me once you've learned category theory and formed an actual argument.

>> No.9779048

>>9778911
Don't have a class in that as a CompE student.

>> No.9779054

>>9778126
People will sometimes call that an "ansatz". Basically, you guess what the result is (or its form) and show that it satisfies the initial problem.
In this specific case there is a deeper reason why it works (probably there always is), but when it was first developed it was just a guess, or a case where the researcher tried working backwards in a sort of trial and error, adjusting the ansatz until it worked.
This is common in solving ODEs and PDEs. There are large classes of equations that have deep ways of finding solutions, but the more complicated they get (and oh boy, PDEs can get nasty) the more common it is to simply see an ansatz and that's it. It's messy and someone has to clean it up eventually.

>> No.9779121

>>9778150
It's not quite a guess. You can partially intuit why this might be the case, more easily when dealing with problems with time derivatives. For instance, consider the distribution of a fixed quantity of material initially concentrated in an infinitely thin slice in x (often modeled with a delta function). Here are some things we would anticipate:

1. The diffusion of the quantity n is symmetric and follows some kind of distribution. This distribution must also obey a conservation law (implying that if we integrate the material distribution from -inf to inf it will add up to the original amount). One distribution we know that follows this rule is the Gaussian distribution.
2. This will decay and broaden with time, meaning that we expect our value of sigma in the normal distribution to increase with time. This can be expressed as any f(t) that strictly increases, and it's pretty easy to see that it will always work, but let's go with just t for simplicity.

The simplest way we can satisfy this is to write an equation of the form (1/sqrt(pi*k*t)) * exp(-x^2/(k*t)). Note that this function is proportional to some function of t, since we can pull the t out of the prefactor and the exponent. It satisfies the n(x,t) = X(x)T(t) condition for separation of variables.

The actual solution matches our guess if k = 4D, where D is the diffusion coefficient. Conceptually, the ability to use separation of variables is a consequence of how our variables interact in the problem. In this case, it is related to the fact that normal distributions remain normal distributions under convolution in time. Basically, if you redistribute all of the particles in a normal distribution about their current locations, the resulting distribution is still Gaussian. Thus, the t is pretty easy to pull out. Anywhere you would anticipate a normal distribution, exponentials, periodic functions, or hyperbolas, separation of variables is usually worth trying.
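
For reference, the closed form the post is gesturing at, with its k = 4D substitution, can be checked directly against the 1-D diffusion equation:
\[
n(x,t) = \frac{1}{\sqrt{4\pi D t}}\exp\!\left(-\frac{x^{2}}{4Dt}\right),
\qquad
\frac{\partial n}{\partial t} = D\,\frac{\partial^{2} n}{\partial x^{2}},
\qquad
\int_{-\infty}^{\infty} n(x,t)\,dx = 1 \;\;\text{for all } t > 0,
\]
which exhibits the conservation property from point 1 and the broadening-with-time behaviour from point 2.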

>> No.9779194

this is the power of "mathematics"
woah