note to the editor: please stop cutting away from the code so quickly. we're trying to follow along in the code based on what she's saying. at that moment, we don't need to cut back to the shot of her face. we can still hear her voice in the voiceover.
I think the time the code was displayed when she went through each line was quite sufficient. The code is very readable (except for the typo where "words" suddenly became "splitLines"), and reading the code while she explains would most likely only distract you from the explanation she is giving, IMHO. If you are looking for a more practical solution, I would recommend just pausing the video and reading the code before she explains it step by step.
Fully agree. The quick switching was very annoying when trying to read the code. Also would be helpful if the editor could highlight the active line she is talking about.
Yeah thank you.
I understood some of those words.
The RDD API is outmoded as of Spark 2.0 and in almost every use case you should be using the Dataset API. You lose out on a lot of improvements and optimizations using RDDs instead of Datasets.
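For example, a rough sketch of the same word count with the Dataset API (untested; the file path and app name are placeholders, not from the video):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("WordCount").getOrCreate()
import spark.implicits._

// Dataset version of the word count. Because Datasets go through the
// Catalyst optimizer and Tungsten execution engine, Spark can optimize
// the plan in ways the RDD API's opaque lambdas don't allow.
val counts = spark.read.textFile("input.txt")  // Dataset[String], one line per element
  .flatMap(_.split(" "))                       // Dataset[String], one word per element
  .groupByKey(identity)                        // group equal words together
  .count()                                     // Dataset[(String, Long)]

counts.show()
```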
Can you do Apache Kafka next? How do they compare?
Ahh.. so refreshing after taking a week-long break from dev work and sticking to non-dev topics. Lol, I love our field. Like music to my ears.
Pretty sure there's a typo in that code: "splitLines" doesn't exist and is probably supposed to be words.map(...) instead.
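Something like this is presumably what was meant (a reconstruction, not the video's exact code; the file name is a guess):

```scala
// Word count with the fix: map over `words`, not the nonexistent `splitLines`.
// Assumes a SparkContext `sc` is already in scope, as in spark-shell.
val textFile = sc.textFile("input.txt")                // RDD of lines
val words = textFile.flatMap(line => line.split(" "))  // RDD of words
val counts = words.map(word => (word, 1))              // pair each word with a 1
  .reduceByKey(_ + _)                                  // sum the counts per word
counts.collect().foreach(println)
```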
A great example of how programming languages are a reasonably efficient mechanism to communicate sections of program and how natural language really is not.
Wow, congrats on the content. You were able to explain it in a concise yet logical and detailed way. Nice.
Is there any meta analysis on the usefulness of bigdata analysis? How often do jobs get run that either produce no meaningful data or don't produce any statistically significant data?
For anyone interested: although the documentation for Apache Flink is awful and it doesn't support Java versions beyond 8, it at least lets you do setup on each node. Spark does not have any functionality for running one-time setup on each node, which makes it infeasible for many use cases.

These distributed processing frameworks are quite opinionated, and if you're not doing word count or streaming data from one input stream to another with very simple stateless transformations in between, you'll find little in the documentation or functionality. They're not really designed for use cases where you have a parallel program with a fixed-size data source known in advance and want to scale it up as you would by adding more threads, but more for continuous data processing.
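One common workaround on the Spark side, for what it's worth, is to piggyback on JVM class loading so setup runs lazily, at most once per executor JVM. A sketch with made-up setup work; this is a pattern, not an official Spark feature:

```scala
import org.apache.spark.sql.SparkSession

// A Scala object is initialized once per JVM, so a lazy val inside it
// runs at most once per executor, on first access.
object NodeLocalSetup {
  lazy val resource: String = {
    // Stand-in for real one-time setup (loading a model, opening a cache, ...).
    s"initialized on ${java.net.InetAddress.getLocalHost.getHostName}"
  }
}

val spark = SparkSession.builder.appName("PerNodeSetupDemo").getOrCreate()
val sc = spark.sparkContext

val results = sc.parallelize(1 to 100, numSlices = 8)
  .mapPartitions { nums =>
    val r = NodeLocalSetup.resource  // triggers the setup once per executor
    nums.map(n => s"$r -> $n")
  }
  .collect()
```

Note it's once per executor JVM rather than strictly once per node, which may or may not be good enough.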
Thank you for teaching an old man new things.
Brady, please make a video on Kubernetes.
feels like this video is four years too late ... :-/
She refers to an early example. Did I miss that video? Otherwise, nicely done. Love learning about distributed computing.
Search for MapReduce on Computerphile
These data ones are really good! Keep them coming!
It's so clear and easy after the explanation! I will be waiting for more vids about clustering and distributed computing)
More of these, please. More big data.
It's a bit silly, but I can't understand 100% because English isn't my first language. I hope someone could add English subtitles to every video on this channel, because I find Computerphile videos easy to understand thanks to the excellent explanations.
She's damn good at explaining and easy to listen to, any plans of having her host other episodes?
(sorry for "her" I don't know her name).
I wish she also talked a little about Spark's ability to deal with data streams
Good old Scala.
Typo on line 32, using `splitLines` instead of `word`?
"RDD is basically an array distributed across the cluster" - genius
Computerphile will be excited to learn that tripods exist.
Really interesting video! I have done some MapReduce before, but I haven't come across Apache Spark.
Great explanations. Of course there are many things going on behind the scenes, but good overview.
Woohooo, Rebecca is back!
Thank you so much. This was an incredible explanation
Was so excited to see this posted :) I'm a Cassandra professional.
Please give time measurements comparing single-node with multi-node execution. What is the overhead?
Where are the extra bits?
Yeah, vertical scaling and modular data handling, similar to the Hadoop and Hive framework libraries.
Can anyone please suggest books to learn about distributed systems?
What programming language is she using??
Sorry for the redundancy, just verifying my understanding. Do I understand correctly that (when running this example in a cluster) collect runs the 'reduceByKey' against the results on each node and then reduces to a final result? Say on Node 1 I have a count of the word 'something' = 5, and on Node 2 I have a count of 'something' = 3; then collect combines those two nodes into a count of 'something' = 8, and so on?
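Almost. The cross-node merge actually happens inside `reduceByKey` (it triggers a shuffle, with partial sums computed on each node first); `collect` just ships the final pairs back to the driver. Conceptually the merge behaves like this plain-Scala sketch, with the counts made up to match the example above:

```scala
// Per-node partial counts, as in the question.
val node1 = Seq(("something", 5), ("other", 2))
val node2 = Seq(("something", 3), ("word", 1))

// What reduceByKey(_ + _) does conceptually after the shuffle:
// bring matching keys together and fold their counts.
val merged = (node1 ++ node2)
  .groupBy { case (word, _) => word }
  .map { case (word, pairs) => word -> pairs.map(_._2).sum }

println(merged)  // e.g. Map(something -> 8, other -> 2, word -> 1), order may vary
```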
What is the architectural difference between Spark and MapReduce?
Ohhh, she is using VSCode! I love VS Code :D
The first time I learned about Apache Spark, I was looking up documentation for another framework named Spark.
Would have liked it to be a bit more in-depth and technical, was too high level.
Do a video explaining AES!
This was very helpful
I really love your videos. I would like to know if it is possible to watch them in French, or at least with subtitles, so that we can follow.
Thank you for the great summary.
Looks like you could do a search engine in that.
Really good summary, thank you!
1:19 Floppy drives? xD LOL
Apache Flink next please
Good video :)
Great video
thanks
Interesting video!
Please show some drawings or animations of data going back and forth between the nodes.
More like this!!!!!!
Thanks, nice vid.
Content is nice, well explained. BUT the camera work and editing are so bad. We are not here for a documentary; the over-the-shoulder shot of her computer is completely useless and distracting. If you want to use your cuts, use something like picture-in-picture, but please let us focus on the code!!
I study bioinformatics, handling text files many gigabytes in size, and this could be so handy.
Hi friends!
@3:16 line 12 is wrong. Otherwise a great review 👍!
First? Does this matter? No. Go build a cluster and be happier!
First? sorry, I've never watched a video when it said it was posted "25 seconds" ago, and so it would be weird if I were actually first.
Good video. I feel like I stink at data analysis, but I'm more experienced than most in my organization, so...
What a useless video: slow down, explain more slowly, assume the audience doesn't know much.
totally lost me 3 min into this video.
00000001
21st!!!
She's mumbling in the beginning... can't really hear her (I'm an American-born English speaker).
first!
First 😂