Franks Mathematics
Joined 10 Jun 2021
In this channel you will find videos about selected topics from mathematics - mostly from numerical and applied mathematics. Cheers!
How to compile and include CBC from Coin-or using VS - UPDATE: Release/64bit and v2.10.12
First video: th-cam.com/video/QpfIkVDKxY8/w-d-xo.html
Linker:
libCbc.lib;
libCbcSolver.lib;
libCgl.lib;
libClp.lib;
libCoinUtils.lib;
libOsi.lib;
libOsiCbc.lib;
libOsiClp.lib;
libOsiCommonTest.lib;
C/C++:
\cbc_new_test\Clp\src\OsiClp;
\cbc_new_test\Cgl\src\CglTwomir;
\cbc_new_test\Cgl\src\CglSimpleRounding;
\cbc_new_test\Cgl\src\CglResidualCapacity;
\cbc_new_test\Cgl\src\CglRedSplit2;
\cbc_new_test\Cgl\src\CglRedSplit;
\cbc_new_test\Cgl\src\CglProbing;
\cbc_new_test\Cgl\src\CglPreProcess;
\cbc_new_test\Cgl\src\CglOddWheel;
\cbc_new_test\Cgl\src\CglOddHole;
\cbc_new_test\Cgl\src\CglMixedIntegerRounding2;
\cbc_new_test\Cgl\src\CglMixedIntegerRounding;
\cbc_new_test\Cgl\src\CglLiftAndProject;
\cbc_new_test\Cgl\src\CglLandP;
\cbc_new_test\Cgl\src\CglKnapsackCover;
\cbc_new_test\Cgl\src\CglGomory;
\cbc_new_test\Cgl\src\CglGMI;
\cbc_new_test\Cgl\src\CglFlowCover;
\cbc_new_test\Cgl\src\CglDuplicateRow;
\cbc_new_test\Cgl\src\CglCommon;
\cbc_new_test\Cgl\src\CglCliqueStrengthening;
\cbc_new_test\Cgl\src\CglClique;
\cbc_new_test\Cgl\src\CglBKClique;
\cbc_new_test\Cgl\src\CglAllDifferent;
\cbc_new_test\Cbc\src\OsiCbc;
\cbc_new_test\Osi\src\OsiXpr;
\cbc_new_test\Osi\src\OsiSpx;
\cbc_new_test\Osi\src\OsiMsk;
\cbc_new_test\Osi\src\OsiGrb;
\cbc_new_test\Osi\src\OsiGlpk;
\cbc_new_test\Osi\src\OsiCpx;
\cbc_new_test\Osi\src\OsiCommonTest;
\cbc_new_test\Osi\src\Osi\;
\cbc_new_test\CoinUtils\src\;
\cbc_new_test\Clp\src\;
\cbc_new_test\Cgl\src\;
\cbc_new_test\Cbc\src\;
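With these linker and include settings in place, a minimal test program along the following lines can be used to check that the headers and libraries are found. This is only a sketch for illustration, not code from the video: the tiny MIP (maximize x + 2y subject to x + y <= 3 with integer variables) is made up, while the CbcModel/OsiClpSolverInterface calls are the standard ones from the CBC examples.

// Minimal CBC test: maximize x + 2y subject to x + y <= 3, x and y integer in [0, 3]
#include <iostream>
#include "CbcModel.hpp"
#include "OsiClpSolverInterface.hpp"
#include "CoinPackedMatrix.hpp"
#include "CoinPackedVector.hpp"

int main()
{
    OsiClpSolverInterface solver;

    // One constraint row: x + y <= 3
    CoinPackedMatrix matrix(false, 0, 0);   // row-ordered matrix
    matrix.setDimensions(0, 2);
    CoinPackedVector row;
    row.insert(0, 1.0);
    row.insert(1, 1.0);
    matrix.appendRow(row);

    const double inf = solver.getInfinity();
    double collb[] = {0.0, 0.0};
    double colub[] = {3.0, 3.0};
    double obj[]   = {1.0, 2.0};
    double rowlb[] = {-inf};
    double rowub[] = {3.0};

    solver.loadProblem(matrix, collb, colub, obj, rowlb, rowub);
    solver.setObjSense(-1.0);   // maximize
    solver.setInteger(0);
    solver.setInteger(1);
    solver.initialSolve();      // LP relaxation

    CbcModel model(solver);
    model.branchAndBound();

    if (model.isProvenOptimal()) {
        const double* sol = model.bestSolution();
        std::cout << "x = " << sol[0] << ", y = " << sol[1] << std::endl;
    }
    return 0;
}

If this compiles, links and prints a solution, the include directories and the lib files listed above are set up correctly.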
Views: 76
Videos
Optimizing a crank gear
47 views · 3 months ago
In this video I will show you how to derive an optimization problem for optimizing a crank gear and make it accessible for a numerical solver. This video is not about existence and uniqueness, but rather about "how to get things done" - the implementation is done in Matlab and I will present the implementation at the end of this video. 0:00 Introduction 1:46 Geometric constraints 5:29 What do w...
Regularization Methods - Part 2: Tikhonov Regularization
691 views · 7 months ago
In the second part of this series we will take a closer look at the Tikhonov Regularization technique in infinite-dimensional vector spaces. We will motivate this type of regularization and also show convergence. Part 1: studio.th-cam.com/users/videoYRo0H7BtYpE/edit Towards a weak formulation of Poisson's equation: studio.th-cam.com/users/video_3q8cw58lAI/edit
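For reference, the finite-dimensional prototype of the method treated in the video is (generic notation, which may differ slightly from the slides): given noisy data y^\delta and a regularization parameter \alpha > 0,

\min_x \tfrac{1}{2}\|Ax - y^\delta\|^2 + \tfrac{\alpha}{2}\|x\|^2, \qquad x_\alpha^\delta = (A^* A + \alpha I)^{-1} A^* y^\delta,

and the infinite-dimensional version replaces A by a bounded operator between Hilbert spaces.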
Towards a weak formulation of Poisson's equation
229 views · 1 year ago
In this video we will establish a weak formulation of Poisson's equation. 0:10 - Classical solution 2:32 - Space of test functions 4:25 - Weak derivative 15:16 - Sobolev spaces 26:00 - Integration by parts 28:32 - Weak formulation 39:06 - Outlook, very weak solution
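For reference, the weak formulation that the chapter list above works towards is the standard one (up to notation): find u \in H_0^1(\Omega) such that

\int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx \quad \text{for all } v \in H_0^1(\Omega),

obtained from -\Delta u = f by multiplying with a test function v and integrating by parts.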
Regularization Methods - Part 1: Introduction to Inverse Problems
1.3K views · 1 year ago
In this video I will give you an introduction to Inverse Problems and show some examples. In the end we also do some math to get this series started. Optimal control of a double pendulum: th-cam.com/video/IoAmbwSkcYs/w-d-xo.html 0:00 - Introduction 0:30 - Forward and Backward Problem 3:41 - Shape reconstruction using shadows 10:32 - Computerized tomography 12:10 - Optimal control of gases 13:23...
Hamel basis versus Schauder basis
2.5K views · 2 years ago
In this video we talk about the concept of a Hamel basis and a Schauder basis in infinite-dimensional vector spaces. 0:14 - Basis in finite-dimensional vector spaces 1:20 - Hamel basis 14:05 - Schauder basis Corrected version, thanks to Evgeniy Evgeniy (th-cam.com/channels/0hqrF8d_F_0zj9fmUY6QFw.html)
Finite Differences for elliptic PDEs - Part 3: Upwind discretization
291 views · 2 years ago
In Part 3 of our series we introduce you to the upwind discretization for a convection-diffusion equation. Part 1: The basics in one dimension th-cam.com/video/biTuD_kZDrQ/w-d-xo.html Part 2: Neumann boundary conditions th-cam.com/video/ULLUjg8syk0/w-d-xo.html 0:00 - Introduction 1:30 - Convection-Diffusion equation 11:24 - Upwind discretization
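As a quick reference (one-dimensional model problem, generic notation that may differ from the video): for -\varepsilon u'' + b u' = f the upwind idea is to replace the central difference for the convective term by a one-sided difference taken against the flow direction,

b(x_i)\, u'(x_i) \approx b(x_i)\,\frac{u_i - u_{i-1}}{h} \ \text{ if } b(x_i) > 0, \qquad b(x_i)\, u'(x_i) \approx b(x_i)\,\frac{u_{i+1} - u_i}{h} \ \text{ if } b(x_i) < 0,

which adds artificial diffusion but avoids the oscillations of the central scheme for small \varepsilon.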
Compiling a parallel version of the CBC solver from COIN-OR for Windows and Visual Studio
1.1K views · 2 years ago
In this video we will explain how to compile a parallel version of the CBC solver using pthreads under Windows using Visual Studio. Git-Repository (including the compiled files): github.com/FranksMathematics/CBC_ReleaseParallel_Win86 First part: (How to compile and include CBC from Coin-or using Windows and Visual Studio) th-cam.com/video/QpfIkVDKxY8/w-d-xo.html GitHub - GerHobbelt/pthread-win3...
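Once the parallel build is linked into a project, the number of threads is set on the model before solving. A minimal sketch (assuming a CbcModel named model that has already been set up as in the first video; the thread count 4 is just an example):

model.setNumberThreads(4);   // use 4 threads; requires the pthreads/CBC_THREAD build
model.branchAndBound();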
Finite Differences for elliptic PDEs - Part 2: Neumann boundary conditions
2.3K views · 2 years ago
In the second part of this series we will deal with Neumann boundary conditions and how we can numerically tackle them. Part 1: th-cam.com/video/biTuD_kZDrQ/w-d-xo.html 0:00 - Introduction and Recap 1:02 - Dirichlet and Neumann boundary conditions 4:59 - Discretization of the boundary conditions 6:51 - Discretization of the whole system 13:41 - Numerical example
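For reference, one standard way to discretize a Neumann condition u'(a) = g (a sketch; the video may use a slightly different variant) is a second-order central difference with a ghost point:

\frac{u_1 - u_{-1}}{2h} = g \;\Rightarrow\; u_{-1} = u_1 - 2hg,

and the ghost value u_{-1} is then eliminated from the difference equation at the boundary node.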
How to compile and include CBC from Coin-or using Windows and Visual Studio (for Beginners)
2.7K views · 2 years ago
There is an update on how to build it in Release/x64 using v2.10.12 (latest release as of October 2024). You find this information here: th-cam.com/video/mMutgRhZKlQ/w-d-xo.html In this video I will explain in detail how to compile the CBC solver from the Coin-or foundation and how to include it in your project. We will use Windows and Visual Studio for this purpose. Coin-OR: www.coin-or.o...
Finite Differences for elliptic PDEs - Part 1: The basics in one dimension
919 views · 2 years ago
In this video we will introduce you to the method of finite differences for solving elliptic partial differential equations. Part 1 deals with the basics in one dimension. Conjugate Gradient method: th-cam.com/video/Z3I831eiwrc/w-d-xo.html Preconditioned Conjugate Gradient method: th-cam.com/video/gsMJxg9-joo/w-d-xo.html Splitting methods: th-cam.com/video/b93IjMHrrjg/w-d-xo.html 0:00 - Poisson ...
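The basic building block (standard notation, possibly differing from the slides) is the central difference approximation of -u''(x) = f(x) on a uniform grid x_i = a + ih:

\frac{-u_{i-1} + 2u_i - u_{i+1}}{h^2} = f(x_i), \qquad i = 1, \dots, N-1,

which, together with the boundary values, yields a tridiagonal linear system for the unknowns u_1, ..., u_{N-1}.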
The Semismooth Newton Method
589 views · 2 years ago
In this video we will introduce you to the concept of semismoothness and the resulting semismooth Newton method. This method is applied in the context of constrained optimization problems. 0:00 - Introduction 1:39 - Constrained Optimization Problems 7:17 - Reformulation as fixed point equation 14:02 - Semismoothness 23:04 - Semismooth Newton Method
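As a compact reminder of the reformulation step mentioned above (written here for a box-constrained problem; the video may treat a more general setting): the first-order conditions of \min f(x) subject to a \le x \le b can be stated as the nonsmooth fixed-point equation

F(x) := x - P_{[a,b]}\bigl(x - \lambda \nabla f(x)\bigr) = 0, \qquad \lambda > 0,

where P_{[a,b]} is the componentwise projection onto [a,b]; F is not differentiable, but it is semismooth, which is exactly what the semismooth Newton method exploits.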
Numerical linear algebra: Preconditioned Conjugate Gradient method
2.3K views · 2 years ago
In this small video we will introduce you to the preconditioned conjugate gradient method. For a simple preconditioner we use the Jacobi method. Numerical linear algebra: Understanding splitting methods: th-cam.com/video/b93IjMHrrjg/w-d-xo.html Numerical linear algebra: Conjugate gradient method: th-cam.com/video/Z3I831eiwrc/w-d-xo.html 0:00 Introduction 1:03 Conjugate Gradient method 1:46 Star...
Numerical linear algebra: Conjugate Gradient method
7K views · 2 years ago
In this video I will present you the Conjugate Gradient method, a popular method used in optimization and numerical linear algebra. 0:00 - Introduction 2:07 - Relation to optimization 3:29 - Digging into linear algebra 12:06 - A-Orthogonality 15:03 - Derivation of the algorithm 22:54 - Outro
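For reference, the recurrences the derivation arrives at are the standard CG iteration for Ax = b with A symmetric positive definite (two directions d_i, d_j are called A-orthogonal if d_i^T A d_j = 0):

r_0 = b - Ax_0, \quad d_0 = r_0, \quad \alpha_k = \frac{r_k^T r_k}{d_k^T A d_k}, \quad x_{k+1} = x_k + \alpha_k d_k, \quad r_{k+1} = r_k - \alpha_k A d_k, \quad \beta_k = \frac{r_{k+1}^T r_{k+1}}{r_k^T r_k}, \quad d_{k+1} = r_{k+1} + \beta_k d_k.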
Numerical linear algebra: Understanding splitting methods
732 views · 2 years ago
In this video I will show you how to put splitting methods like the Jacobi method, Gauss-Seidel method and SOR method into a general framework and how to understand them from an optimization point of view. 0:00 - Introduction 1:33 - Derivation 7:05 - Deeper understanding using optimization techniques 16:37 - Preparation for examples 17:29 - Jacobi method 21:43 - Gauss-Seidel method 27:22 - SOR ...
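For reference, the general framework (standard notation) writes A = M - N with an easily invertible M and iterates

x_{k+1} = M^{-1}(N x_k + b) = x_k + M^{-1}(b - A x_k);

with A = D - L - U (diagonal, strictly lower and strictly upper parts), Jacobi corresponds to M = D, Gauss-Seidel to M = D - L, and SOR to M = \tfrac{1}{\omega} D - L with relaxation parameter \omega.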
Optimal control of a double pendulum using the fmincon function from MATLAB
2.1K views · 2 years ago
Introduction to the explicit Euler method for ordinary differential equations
104 views · 2 years ago
The Math behind AI - Part 3: Backpropagation
62 views · 3 years ago
How to compute the derivative of composition of functions with two or more arguments.
87 views · 3 years ago
The Math behind AI - Part 2: Stochastical and Mini-Batch Gradient Method
170 views · 3 years ago
The Math behind AI - Part 1: Setting up the optimization problem.
286 views · 3 years ago
Worked fine for me. Thanks for the explanation 👍
I got an error when running F5 in the last step (Unable to start program 'C:\Users\firda\OneDrive\Desktop\CBC Test\Debug\CBC Test.exe'. The system cannot find the file specified). In addition, I tried to copy and paste the cbc.exe from 'C:\Users\firda\OneDrive\Desktop\coin-or\Cbc\MSVisualStudio\v17\Debug' and rename it to CBC Test.exe, but the problem persists. Please help me.
The cbc.exe is not the exe we built! When compiling the solution "cbc", the cbc.exe in 'C:\Users\firda\OneDrive\Desktop\coin-or\Cbc\MSVisualStudio\v17\Debug' is built. We, however, want to include CBC in our own projects (using the lib files and so on). Regarding your problem: do you have any problems compiling the solution "CBC Test"? If there aren't any errors, an executable should appear in CBC Test/Debug/. I suppose there are some compilation errors. Maybe there are some issues with your shortcuts... Try to compile by right-clicking on the solution, then choose "Build" or "Compile". In addition, I recommend following the newer guide, which is linked in the video description. The way I include all the header files in this video is not the best way; in the newer video I show how to do it properly. Also, this video is two years old, so there might be some changes which cause your problems. If you have any more questions, I'd be happy to help.
@@franksmathematics9779 I really appreciate your response, thanks Frank. I will check out your other videos.
Thanks
Frank, I built the binaries for the current master using v17 in Debug for x64 - no problem. However, after copying the headers and libs to my C++ console app it's nothing but errors, such as: Error C1083 Cannot open include file: 'coin/CbcModel.hpp': No such file or directory ConsoleCutStockSolver C:\Users\utilisateur\source\repos\ConsoleCutStockSolver\ConsoleCutStockSolver.cpp. I have followed your method for at least 4 attempts now but the same error keeps popping up. Can you make an updated video using the current versions of CBC and Visual Studio?
This sounds like a linker error - does VS know where to find the binaries and the h-files? Without the header files you cannot use the binary, as the linker does not know where to "enter". I am not sure if something has changed in the newest version, but the procedure (building the library and linking it correctly into your project) should be the same, as it is a standard procedure. However, I will check on this - hopefully this weekend, because after that I am traveling for a month. But I am glad that you were able to build the v17 version - this is the first step!
@@franksmathematics9779 Thanks Franck, that would be great if you could verify this. I am using the VS2022 Community edition and had no problem building the binaries; it is just when I include them in my project that I get errors.
@@Megapixelrealestate I had no trouble using the newest version v2.10.12 in Release mode using x64, using VS2022 Community (17.11.2). The process is nearly the same - however, I just chose a different approach to link the h-files to VS. You can find the update here: th-cam.com/video/mMutgRhZKlQ/w-d-xo.html I hope this helps you! Best, Frank
@@franksmathematics9779 Thanks for verifying, Franck. The problems I am getting must be related to bugs in my code; at least I know it's not a version problem.
@@Megapixelrealestate Good luck hunting down the bugs!
It would be greatly appreciated if you could create a 64-bit parallel version of CBC 2.10.12 for Windows. I was unable to get the recipe mentioned on the GitHub page of CBC to work.
Frank, I followed your video to compile CBC but using the master on the Release/2.10.12 page. I am trying to build an x64 solution in Release but getting all sorts of errors and just can't get Visual Studio configured to work without errors. Could you please create a video using the up-to-date branch and configured for x64 Release in VS 2022? Is there a reason why you made this video using 32 bit? Is it less prone to errors?
There was no specific reason why I picked the 32-bit version. Maybe just for fun? I really don't remember anymore :D If you are free to choose, pick the 64-bit version! However, it is weird that you can compile it in Debug but not in Release. Have you checked your linker details (see the video for details)? Are they different in Debug and Release?
@@franksmathematics9779 I can in fact compile cbc in release without errors, the errors arise in my own project to which I have added the compiled binaries.
I downloaded the source code zip file for the release 2.10.8 which you cloned; however, it only contains v9, v10 and v14 - it's missing v9alt, v10alt and v16. What's going on?
Oh, that is weird - I just checked the master branch; there v10, v14, v16 and even v17 are present. But you are right: in stable/2.10 some folders are missing - but v17 is there. So you can either switch to the master branch, or use the v17 version with the newest VS version. Or maybe try the v16 version from the master branch with the stable/2.10 branch - maybe that works...
Wow! This is a great video explaining in simple terms how to set up cbc. Thank you Frank for the effort you put in to produce this video.
Thanks for this video! I may be wrong, but isn't the multiplication by d_i inside the sum missing in the Gram-Schmidt algorithm?
Great video! Could you maybe give a resource (maybe a book) on the basics of regularization with a nice intro via linear systems, as you did? Specifically, where I can find the definition after Hadamard and some analysis of examples?
Thanks a lot for this video. I found it very useful! I have just a small question: at 13:24 where you introduce the notation for the inverse problem, it states that `u is an element of Y` underneath `min`. Is this a typo? I would have thought it should be `u is an element of U`. Thanks again!
Yes, you are correct, this is a typo. u in U represents the constraints which should apply to u, therefore it should be \min\limits_{u \in U}. Thanks for pointing this out!
Thanks for the explanation. I am a beginner at this, but I would like to know what the matrix P is here. Could you please explain it a bit more? Thanks.
Hi, the matrix P is our preconditioner! Around minute 12 I give an example of how to use the Jacobi method as a preconditioner, which will result in a certain matrix P. The idea/derivation of the matrix P was given earlier in the video. The choice of an efficient P (in terms of: fewer iterations in the CG method), however, is not that easy, since this heavily depends on the underlying problem. You can easily derive different matrices P for different splitting methods like Gauss-Seidel, SSOR and so on. I hope this helps in understanding the idea - if you have any more questions, please do not hesitate to ask!
@@franksmathematics9779 Thanks for your explanation. I am now somehow able to apply it to my computations :D Now, I have another question: in your example, did you apply the preconditioner to both z^k and z^(k+1)? If yes, does it mean that the number of steps of the Jacobi method (minute 14:05) applies to both z^k and z^(k+1)? Look forward to hearing from you. Thank you very much.
@@KebutuhanKirimFile2 Glad you made it work! The number of Jacobi steps is denoted by l, which appears in the definition of the matrix P. So the formally correct answer to your question would be: "You only apply the matrix P to z_k once to obtain z_{k+1}!" In practice you will not compute the inverse of P, but rather apply l steps of the Jacobi method to z_k and call the output z_{k+1}! Best, Frank
@@franksmathematics9779 Thanks for your reply, but maybe I phrased it poorly. Let me clarify once again. First question: so, in your example, the value l = 5 gives a total of 130 iterations of CG. Does it mean that the Jacobi loops here are in fact performed 5 x 130 = 650 times? Second question: what about the actual CPU time for different values of l? I can understand that increasing the value of l will decrease the number of iterations of CG, but in fact doing the Jacobi loops for larger l values will also consume time (memory accesses, arithmetic operations, etc.)? Look forward to hearing from you. Thank you very much!
@@KebutuhanKirimFile2 Yes, you are right, multiplying (number of Jacobi iterations in each step) * (number of CG steps) gives the final number of performed Jacobi iterations. Regarding your second question: I do not have the data available anymore. There should be some sort of "sweet spot" where the computation time is minimal. Maybe I will redo the calculation and measure the CPU time - this sounds like an interesting question. Have you done any research into this topic/area, and can you provide the results?
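A minimal sketch of what this looks like in code (not the code from the video; plain C++, dense matrix, made-up 3x3 example): "applying the preconditioner" is realized as l Jacobi sweeps on A z = r starting from z = 0, exactly once per CG step.

// Preconditioned CG where P^{-1} r is approximated by l Jacobi sweeps on A z = r.
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

Vec matvec(const Mat& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t j = 0; j < A[i].size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// l Jacobi sweeps for A z = r, starting from z = 0 (this plays the role of z = P^{-1} r)
Vec jacobiPrecond(const Mat& A, const Vec& r, int l) {
    Vec z(r.size(), 0.0);
    for (int sweep = 0; sweep < l; ++sweep) {
        Vec Az = matvec(A, z);
        Vec znew = z;
        for (size_t i = 0; i < r.size(); ++i)
            znew[i] = z[i] + (r[i] - Az[i]) / A[i][i];
        z = znew;
    }
    return z;
}

void pcg(const Mat& A, const Vec& b, Vec& x, int l, double tol = 1e-10) {
    Vec Ax = matvec(A, x);
    Vec r(b.size());
    for (size_t i = 0; i < r.size(); ++i) r[i] = b[i] - Ax[i];   // initial residual
    Vec z = jacobiPrecond(A, r, l);                              // z_0 = P^{-1} r_0
    Vec d = z;
    double rz = dot(r, z);
    for (int k = 0; k < 1000 && std::sqrt(dot(r, r)) > tol; ++k) {
        Vec Ad = matvec(A, d);
        double alpha = rz / dot(d, Ad);
        for (size_t i = 0; i < x.size(); ++i) { x[i] += alpha * d[i]; r[i] -= alpha * Ad[i]; }
        Vec znew = jacobiPrecond(A, r, l);   // apply the preconditioner once per CG step
        double rznew = dot(r, znew);
        double beta = rznew / rz;
        for (size_t i = 0; i < d.size(); ++i) d[i] = znew[i] + beta * d[i];
        rz = rznew;
    }
}

int main() {
    Mat A = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};   // small SPD example
    Vec b = {1, 2, 3}, x(3, 0.0);
    pcg(A, b, x, /*l=*/5);
    std::printf("x = %g %g %g\n", x[0], x[1], x[2]);
    return 0;
}

The total number of Jacobi sweeps is then l times the number of CG iterations, as discussed in the thread above.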
It would be great if you could explain why we need the idea of the eigen-space when we use A-orthogonality to construct the space on which the definition of x^{\star} is based.
Hi, you could start directly with the definition of A-orthogonality for sure - but I thought it was helpful to actually show where the idea of A-orthogonality arises (starting with the optimization problem -> going to the contour lines, which are ellipses -> they can be described in the eigen space of A -> put this into a definition -> the contour lines of f can be described as a circle in the eigen space of A, roughly speaking). For the construction of the CG method it is not needed. Best, Frank
Hi, awesome explanation. When will we get the next part?
Thank you! Part 3 is already structured in my head, but I haven't had the time to start working on it. So I am afraid it might take some time...
Thank you Frank! It helped a lot. Could you please explain how to set up OsiGrbSolverInterface?
Thank you for your detailed video! It came just in time, as I started my master's thesis on inverse problems in spectroscopy (reversing the filtering effects of the optical system). Is there a reason for keeping the 1/2 in the optimization problems? Minimizing 2norm(Ax-y)^2 times 1/2 should be the same as minimizing 2norm(Ax-y)^2, right?
There is no mathematical reason for it. The only reason is that the 1/2 cancels with the factor 2 coming from the norm when you compute the derivative. Since one mainly works with the derivative, this makes some terms "look nicer". Good luck with your master's thesis - you picked an interesting topic!
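To spell out the cancellation:

\nabla_x \Bigl( \tfrac{1}{2}\|Ax - y\|^2 \Bigr) = A^\top (Ax - y), \qquad \nabla_x \Bigl( \|Ax - y\|^2 \Bigr) = 2 A^\top (Ax - y),

so both functionals have the same minimizers, but the first gradient is free of the factor 2.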
Note that I made a small error: at 8:40 one obtains a set B which is a maximal element of P! Please ignore the last sentence on that slide.
Please note: In the slide beginning at 4:00 you are supposed to divide by 2h in the central difference. I'm sorry for that typo.
Please note: In the slide beginning at 5:00 you are supposed to divide by 2h in the central difference. Thanks @ChristianDreyerSrbu for pointing this out to me!
Aren't you supposed to divide by 2h in the central difference?
Ah, dang it. You mean on the slide starting at 5:00? That's true, I made a typo there. Thanks for pointing this out, I will add a note there!
There is an error around 8:40, coming from a misrepresentation of Zorn's lemma. B is not such that T ⊆ B for all T in P - this makes no sense: one can easily obtain G and H in P such that the union of G and H is not in P. Rather, B is such that if B ⊆ T and T is in P, then B = T.
My formulation there is indeed not correct - I should have stated the definition of maximality. Nevertheless, the set B obtained by Zorn's lemma will give us the correct set. Thanks for pointing it out.
I wonder if your version could be a part of the official Coin-OR binary download section. The versions there are not parallel - a pity in times of ubiquitous multi-core systems. I usually use cbc with Python mip, but I'm not sure about the performance of multithreading. With Python pulp there comes a parallel version of cbc.exe that is a) much slower than yours and b) has some issues with hanging on thread locks. I replaced this with your version and now all is working quite well.
Great talk. I am waiting for the next part 🥺
Thanks! I am working on it - it is 75% finished. However, due to a new job and some other changes in my life I haven't had much time to work on my YouTube channel. But this channel is not dead, I promise!
I was also able to set up my Cbc project in v17. Thank you so much for this video!
Is the basis used in finite- or infinite-dimensional vector spaces the same as a Hamel basis, or is it a specialization of a Hamel basis?
Can you specify your question? As I showed in this video, for infinite-dimensional spaces a Hamel basis is always uncountable, while a Schauder basis is countable by definition, hence they cannot be the same. A Schauder basis might be a subset of a Hamel basis. The problem here is: you cannot explicitly construct a Hamel basis. In finite dimensions, however, everything is the same; there is no need to differentiate between a Schauder and a Hamel basis. I hope I could answer your question, if not feel free to ask again!
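A standard example that illustrates the difference (not necessarily the one used in the video): in \ell^2 the canonical unit sequences e_k form a Schauder basis, since every x = (x_1, x_2, \dots) satisfies

x = \sum_{k=1}^{\infty} x_k e_k \quad \text{(convergence in the } \ell^2\text{-norm)},

but they are not a Hamel basis, because finite linear combinations of the e_k only produce sequences with finitely many nonzero entries.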
@@franksmathematics9779 Ok, I get it. Thanks.
very nice video! thank you my friend👍
What change should be made in the code if the reference input is a time-varying signal?
Hi, can you please explain your question a little bit more? Do you want to add an additional time-dependent source/force inside the ODE? In this case no special changes have to be made: just stick with the discretization and add the pointwise evaluation of your time-dependent source.
In the code given by you, the final desired end-effector position is (x,y) = (2,2). Instead of this, if I want to make the end effector follow a sinusoidal trajectory, say x = 0.5 sin(wt), y = 0.5 cos(wt), what change should I make in the code? @@franksmathematics9779
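Without the exact code at hand, the usual modification (a sketch of the idea only, not a statement about the video's implementation) is to replace the fixed terminal target by a tracking term in the objective,

\min_u \int_0^T \bigl\| p(t) - p_{\mathrm{ref}}(t) \bigr\|^2 \, dt, \qquad p_{\mathrm{ref}}(t) = \bigl(0.5\sin(\omega t),\, 0.5\cos(\omega t)\bigr),

where p(t) is the end-effector position produced by the dynamics; after time discretization this becomes a sum over the grid points that fmincon can minimize subject to the same constraints.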
Thank you!
Great! Thank you for the effort of making such an informative and detailed video about a Newton-type method.
Thank you!
If you have the full slides on functional analysis, please send them.
Excellent explainer! Looking forward to the next part.
Thank you! I am quite busy at the moment but I am still working on the next part(s). Stay tuned!
Thank you Frank! This helped me set up my C++ CBC project.
It is very good. Great job.
One question: is it possible to make CBC work in Visual Studio Code? I tried to follow the steps, but in Visual Studio Code I can't get it to work using Pyomo in Python. I need to use the solver in optimization problems! I would greatly appreciate your help! Greetings!!
Thank you for this video. I have a clearer understanding of the topic. But adding c(x) confuses me. I have a problem at hand that gives me no solution from the matrix. Is there a chance you can explain it to me? The problem is d2u/dx2 = -sin(x) with u'(0) = 1 and u'(pi) = -1. I took h = pi/3, but my matrix does not give me a solution and I do not have any c value to add.
Hi, the problem is that your problem u''(x) = -sin(x) with u'(0) = 1 and u'(pi) = -1 has no unique solution. It is the same argument as in the video: if w(x) is a solution to your problem, so is w(x) + c with an arbitrarily chosen but constant c (just plug it into the problem and compute the derivative). If the exact analysis has no unique solution, how can we expect the numerical approximation to have a unique solution? We simply cannot expect this behaviour. Here this will result in a singular system matrix. I hope this helps to clarify the problem.
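To see the non-uniqueness explicitly in this example: integrating u''(x) = -\sin(x) twice gives

u(x) = \sin(x) + a x + c, \qquad u'(x) = \cos(x) + a,

and both boundary conditions u'(0) = 1 + a = 1 and u'(\pi) = -1 + a = -1 only force a = 0, so u(x) = \sin(x) + c is a solution for every constant c; accordingly the discrete system matrix is singular.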
@@franksmathematics9779 Thank you so much
Awesome video 👏 love from India 🎉❤
This was the best video I've seen on this topic so far! Thank you
Thank you very much for your very nice explanation. Could you share the textbook you referenced? (Just the name is OK.)
Unfortunately I do not have a textbook available where this is described in the way I did. I did a short check on my textbooks but they also do not cover the preconditioned CG method. Most of the stuff I present here is taken from papers - If papers are okay I can give you some names...
@@franksmathematics9779 If you can, I would appreciate it, thank you very much.
@@heesangyoo1536 It took me a while, but I found the source where most of the results are taken from - unfortunately it is written in German and I am afraid there is no English translation... I hope this helps at least a little bit: Kanzow - Numerik linearer Gleichungssysteme (Numerics of linear systems of equations), Chapter 5.4 - Das präkonditionierte CG-Verfahren (The preconditioned CG method). Here is the link: link.springer.com/book/10.1007/b138019 If I stumble upon something else I will let you know...
How could we solve this problem in reduced order? Since all state variables are a function of the control input u, we could just solve the optimization problem to find the optimal value of u and, consequently, the optimal state values will be derived from the state equations. To this end, the system must satisfy the controllability criterion, which is the case here.
Hi, in the end most of these problems are handled the way you described: getting rid of the state variable y and reducing the problem to an optimization problem which only depends on the control u. However, to do so there is a lot of theory (especially theory of ODEs like solvability, reachability, controllability and so on) which is skipped in this video, as the focus was on the numerical implementation. If you think of a more basic approach (not using the magic black box fmincon), like a (semismooth) Newton method, you have an update step of the form u_{k+1} = u_k + s(u_k), where s is a (more or less complex) function. And here comes the fun part: to compute s(u_k) you have to solve the ODE, or a linearized form of it - sometimes even multiple times. That's one way to solve it in reduced order, I hope this helps. Edit: In my video about the Semismooth Newton Method I did a reformulation of a constrained optimization problem (and this problem in the video is nothing more than that) into a fixed point equation, which is solved by a Newton method. A similar approach can be done here: th-cam.com/video/zOIMH6fNFOA/w-d-xo.html
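In symbols, the reduction described here (standard notation) replaces the problem \min_{y,u} J(y,u) subject to the ODE by the reduced problem

\min_u \hat{J}(u) := J\bigl(S(u), u\bigr),

where S is the control-to-state map, i.e. y = S(u) solves the ODE for the given control u.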
@@franksmathematics9779 Thanks for your comprehensive reply. I would assume that solving the reduced-order problem with fmincon (if, for the sake of simple implementation, that is my choice at the moment) is only possible once we just have input constraints and not state constraints. In this case, the state equation can easily be integrated into the objective function recursively (I think this approach works). In other cases where we have state constraints or state-dependent constraints on the input, we must also include the states as decision variables. Am I right?
Yes, this is correct (depending on the type of constraints). Without state constraints your approach should work. You just need a straightforward implementation of an ODE solver (this could be some recursion, of course). Are you going to implement this? If so, keep me updated! State constraints, especially inequalities, in ODE/PDE optimization are always ugly. As far as I know there is no direct solver or technique for including them in a reduced formulation. I once wrote a paper on how to tackle them with some sort of augmented Lagrange function, which however is again a sequence of optimization problems...
@@franksmathematics9779 I work with LTI state equation and would like to consider different objectives with and without state constraints. I will share the results.
@@franksmathematics9779 The Feasible Direction Algorithm handles state-dependent input constraints in the reduced format. The effort needed is to convert state constraints into state-dependent input constraints.
Using (x,y) for the inner product is not good. Pretty much everybody uses (x,y) for vectors. It's better to stick with <x,y> for the inner product.
This depends. In my area of research (functional analysis / optimal control) it is very common to use (x,y) for inner products.
@@franksmathematics9779 How do you write a tuple of vectors x, y instead of (x,y)? By the way, using (x,y) for vector tuples is common in many optimization books. I am quite surprised it's not the case in your optimal-control-related field.
@@ncroc Fair enough, good point. To be more precise, I usually add an index like (x,y)_X to indicate that I use the inner product in the space X. However, I usually drop this index when it is clear from the context. I have to admit that this may lead to some misunderstandings with tuples if you are using tuples along with inner products.
Very clear explanation. Thanks.
(You say contradiction when you mean contraction)
In my defense: English is not my mother tongue ;) But yes, you are right - I hope it is clear from the context though.
Such a good explanation!
Beautiful. Thank you.
Very clear! Waiting for the next parts!!
Thank you! I am quite busy at the moment but I am still working on the next part(s). Stay tuned!
Hi Frank, can you send me your compiled version? I followed everything but got stuck at the end with a linking error.
Hi Stuart. Sure, here you go: github.com/FranksMathematics/CBC_ReleaseParallel_Win86 This repository includes the Visual Studio files, including the compiled files. Please note that I only changed the profile "ReleaseParallel" under x86. If you want it for another architecture you have to change it accordingly.
@@franksmathematics9779 Thanks I figured out my issue I was trying to link the x64 version of cbc with the win32 version of pthreads
great video, thanks
What about a vector space where every vector is a linear combination of an uncountably infinite set of basis vectors? So we need to integrate over the set of basis vectors? Does such a vector space exist?
Hi, I am not sure what you mean: given an uncountably infinite set of linearly independent vectors (in some vector space V), you then construct a new set containing all linear combinations of these vectors? Your question is then: does this set form a vector space? It is clearly a subspace of V, so all you have to check is the subspace property. If it is a vector space you can apply the results from this video. However, dealing with uncountable sets is tricky, as you cannot simply write them down. Hopefully this answers your question? If not, please clarify your question! Best, Frank
Hi, thank you. Could you please make a video on solving the HJB equation in the case of an optimal control problem (numerically)?
Hi, I suppose you are talking about the Hamilton-Jacobi-Bellman equation? That's an interesting topic/question for sure. I have dealt with these equations before and I will definitely put this on the list. At the moment I am preparing a series about optimal control problems and how to solve them. The HJB equation would definitely fit there!
@@franksmathematics9779 sure thank you