Great explanation, Mas Eko.
Imagine if public-service apps were built like this too. Just imagining it for now, hehe.
🎯 Key Takeaways for quick navigation:
00:01 📊 *Overview of Grab's Order Processing*
- Grab processes millions of orders daily, with the potential for tens of millions or more.
- The video introduces the backend system handling GrabFood and GrabMart orders.
- Focus on understanding the real-world handling of orders after a user places a Grab Food order.
01:11 🎯 *Design Goals for Database Solution*
- Design goals include stability, scalability, cost-effectiveness, and consistency.
- Importance of distinguishing between transactional and analytical queries.
- Examples of transactional queries critical for online order creation and completion.
03:40 🔄 *Distinction Between Strong and Eventual Consistency*
- Grab distinguishes strong consistency for transactional queries and eventual consistency for analytical queries.
- Strong consistency ensures real-time processing for critical transactional queries.
- Eventual consistency is acceptable for less critical analytical queries.
04:24 🗃️ *Separate Databases for Transactional and Analytical Queries*
- The first design principle involves using different databases for transactional and analytical queries.
- Transactional databases are critical for real-time online order processing.
- Analytical databases store historical and statistical data and keep data for a more extended period.
05:33 ⚖️ *Benefits of Different Databases for Query Patterns*
- Different databases fulfill various query patterns and requirements.
- Enables better stability by selecting databases tailored to specific query types.
- Addresses the challenge of balancing real-time processing and historical/statistical data storage.
06:33 🔄 *Data Ingestion Pipeline for Consistency*
- Introduction of the second design principle: Data Ingestion Pipeline.
- Explains how the pipeline ensures consistency between transactional and analytical databases.
- Orders are initially stored in the OLTP database and asynchronously pushed into the data ingestion pipeline.
07:40 🌐 *Architecture Details of OLTP Database*
- OLTP database contains two categories of queries: key-value queries and batch queries.
- DynamoDB is used for transactional queries due to its scalability and high availability.
- Challenges and solutions for handling high-traffic queries and maintaining full capacity.
09:31 💡 *DynamoDB Features and Adaptive Capacity*
- DynamoDB's adaptive capacity handles hot-key traffic by allocating higher capacity to high-traffic partitions.
- Explanation of adaptive capacity mechanism to optimize usage based on traffic.
- Overview of DynamoDB's three-way replication for stability and availability.
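The adaptive-capacity idea above can be sketched as a toy allocation in plain Python (not the real DynamoDB mechanism; the partition names and numbers are made up for illustration):

```python
# Toy sketch of adaptive capacity: a fixed table-level capacity is
# redistributed across partitions in proportion to their recent traffic,
# so a hot partition receives a larger share instead of throttling.

def allocate_capacity(total_capacity, traffic_per_partition):
    total_traffic = sum(traffic_per_partition.values())
    return {
        partition: total_capacity * traffic / total_traffic
        for partition, traffic in traffic_per_partition.items()
    }

# Partition "p3" is the hot key here, so it gets most of the capacity.
shares = allocate_capacity(1000, {"p1": 10, "p2": 10, "p3": 80})
```

The real service does this continuously and with more safeguards; the sketch only shows the proportional-share intuition.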
11:52 🔄 *Global Secondary Index for Batch Queries*
- Introduction of DynamoDB Global Secondary Index (GSI) for supporting batch queries.
- GSI acts like a normal Dynamo table and enables querying by attributes other than the primary key.
- Use of GSI to facilitate batch queries like "get ongoing orders by Passenger ID."
14:54 🔍 *Details on DynamoDB Global Secondary Index*
- Explanation of GSI as a table with its own partition key.
- GSI allows querying based on attributes other than the primary key.
- Comparison to materialized views, where data is duplicated for optimized querying.
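The materialized-view comparison above can be sketched as a toy model in plain Python (not the actual DynamoDB API; the attribute names are illustrative assumptions):

```python
# Toy model of a DynamoDB table plus a Global Secondary Index (GSI).
# The base table is keyed by order_id; the GSI duplicates each item
# under a different partition key (passenger_id), like a materialized view.

class OrdersTable:
    def __init__(self):
        self.by_order_id = {}        # base table: partition key = order_id
        self.gsi_by_passenger = {}   # GSI: partition key = passenger_id

    def put_item(self, item):
        self.by_order_id[item["order_id"]] = item
        # The GSI is maintained on every write, at the cost of
        # storing the data twice.
        self.gsi_by_passenger.setdefault(item["passenger_id"], []).append(item)

    def get_item(self, order_id):
        # Key-value query against the base table.
        return self.by_order_id.get(order_id)

    def query_gsi(self, passenger_id, status=None):
        # Batch query "get ongoing orders by passenger ID" via the GSI.
        items = self.gsi_by_passenger.get(passenger_id, [])
        if status is not None:
            items = [i for i in items if i["status"] == status]
        return items

table = OrdersTable()
table.put_item({"order_id": "o1", "passenger_id": "p1", "status": "ongoing"})
table.put_item({"order_id": "o2", "passenger_id": "p1", "status": "completed"})
ongoing = table.query_gsi("p1", status="ongoing")
```

In real DynamoDB the GSI replication is asynchronous and eventually consistent, which the toy model glosses over.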
15:49 🔄 *Data Retrieval and Tables Structure in DynamoDB*
- DynamoDB usage for order and passenger retrieval.
- Explanation of key-value and batch queries in DynamoDB.
- Introduction to DynamoDB Global Secondary Index (GSI) for batch queries.
16:30 🕒 *Data Retention Challenges in DynamoDB*
- DynamoDB's time-to-live feature and its impact on data retention.
- Challenges with adding time-to-live (TTL) to large tables.
- Strategy to manually delete items without TTL attribute.
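The retention strategy above (TTL on newer items, manual deletion for legacy items that lack the TTL attribute) can be sketched as a toy filter; the field names and the 90-day figure are illustrative assumptions based on the three-month retention mentioned later:

```python
import time

# Toy sketch of the retention strategy: items written after TTL was
# enabled carry an "expire_at" attribute and age out automatically;
# older items lack it and must be swept manually.

RETENTION_SECONDS = 90 * 24 * 3600  # roughly three months

def sweep(items, now):
    """Return the items to keep; expired or over-retention items are dropped."""
    kept = []
    for item in items:
        if "expire_at" in item:
            if item["expire_at"] > now:      # TTL not reached yet
                kept.append(item)
        elif now - item["created_at"] < RETENTION_SECONDS:
            kept.append(item)                # legacy item, still within retention
        # else: legacy item past retention -> delete manually
    return kept

now = time.time()
items = [
    {"order_id": "o1", "created_at": now - 10, "expire_at": now + RETENTION_SECONDS},
    {"order_id": "o2", "created_at": now - RETENTION_SECONDS - 1},  # legacy, too old
]
kept = sweep(items, now)
```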
17:10 🔄 *Choice of Analytical Database (MySQL) and Data Retention*
- Adoption of MySQL for analytical purposes.
- Decision to use RDS (Relational Database Service) due to maturity.
- Limiting data retention in DynamoDB to three months.
18:19 📊 *Data Ingestion Pipeline and Message Handling*
- Usage of Amazon Kinesis Data Streams for the data ingestion pipeline.
- Handling failures through Amazon Simple Queue Service (SQS).
- Implementation of back-off retry for stream events.
19:32 🔄 *Ensuring Consistency in Data Ingestion Pipeline*
- Back-off retry strategy for stream events and database level consistency.
- Utilization of Dead Letter Queue for unsuccessful retries.
- Possibility to rewind stream events from Kafka in worst-case scenarios.
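The back-off-retry-then-DLQ flow above can be sketched in plain Python (a toy, with made-up delays and a simulated transient failure, not the actual SQS/Kinesis integration):

```python
import time

# Toy sketch of back-off retry: a stream event is retried with
# exponential back-off, and parked in a dead-letter queue (DLQ)
# once the retries are exhausted.

def process_with_backoff(event, handler, dlq, max_retries=3, base_delay=0.01):
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return handler(event)
        except Exception:
            time.sleep(delay)   # wait before retrying
            delay *= 2          # exponential back-off
    dlq.append(event)           # give up: park the event for later inspection
    return None

dlq = []
calls = {"n": 0}

def flaky_handler(event):
    # Fails twice, then succeeds -- simulates a transient downstream error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return f"ok:{event}"

result = process_with_backoff("order-123", flaky_handler, dlq)
```

The worst-case path in the talk, rewinding the stream from Kafka, would correspond to replaying everything parked in the DLQ.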
21:05 🔄 *Conclusion: Stability, Scalability, and Cost Efficiency*
- DynamoDB for high availability in online order processing.
- RDS for scalability in supporting business requirements.
- Cost efficiency achieved through data retention strategies.
21:49 🛠️ *Areas of Improvement and Future Plans*
- Exploration of NoSQL databases like Elasticsearch for more complex queries.
- Acknowledgment of potential improvements in the existing database solution.
- Continuous evaluation and refinement of the current system.
22:43 🚀 *Key Takeaways and Insights for Application Design*
- Understanding the strategy of handling millions of orders.
- Distinction between transactional and analytical processes.
- Adoption of a dual-database approach for optimized performance.
Made with HARPA AI
The conclusion I take away for Grab:
Two databases:
1 DynamoDB for high traffic + transactions -> temporary data / transaction log (auto-delete the oldest records, or delete manually)
1 MySQL RDS for records -> historical/analytical
My question is: why not just use MongoDB for the OLTP side? MongoDB is actually more powerful for records without needing data dependencies. And it's lightweight.
Probably the cost. They're on the cloud, so paying for MongoDB Atlas might get even more expensive.
@@ProgrammerZamanNow On AWS it really is expensive. You get charged for everything. AWS Load Balancer + Auto Scaling is powerful, sure, but that's the thing: you get what you pay for. Thanks for the extra knowledge, Mas...
The way I see it, Grab's line of business is B2C, business to consumer, so huge transaction volumes are a given. What they need is ELB from Amazon. And so that their critical primary OLTP database runs well, they went the opinionated route: a single vendor. If they used MongoDB, that would mean yet another separate server. So in conclusion, it's ELB they need more than the NoSQL database itself. If MongoDB could run on AWS's ELB cloud, they might well consider MongoDB. Firebase is also an attractive competitor, but unfortunately it's just as vendor-locked.
MongoDB Atlas pricing on AWS is a bit... you know.
@@rifkiaz Oh, could be. Probably because they wanted it cheaper.
00:08 Grab processes millions of orders per day with a distributed system.
02:25 Grab uses different databases to serve transactional and analytical queries.
04:42 Transactional and Analytical databases are used differently in Grab's order processing system.
07:03 Grab processes millions of orders per day by sending the data to the OLTP database and the data ingestion pipeline for processing.
09:19 DynamoDB is used by Grab for its transaction database because it is scalable and highly available.
11:30 DynamoDB uses adaptive capacity to handle hot-key traffic.
13:43 DynamoDB uses a Global Secondary Index (GSI) for batch queries.
15:51 Grab processes millions of orders daily using DynamoDB and MySQL.
18:00 Grab uses RDS and Kafka to process millions of orders per day.
20:12 Grab uses two databases for transactions and analytics with asynchronous synchronization using Kafka.
22:18 Grab processes millions of orders by using two different databases for transaction and analytical purposes.
Crafted by Merlin AI.
Thank you, bro!
Awesome, Om.
Now we get to read it together. Reading it on my own I wouldn't necessarily understand it, haha, and here it even gets explained. Great stuff.
Hope it's useful 👍
Yeah, that's right, Mas, haha. On my own I might even misunderstand it.
Why does the ingested data have to go through Kafka first instead of straight into MySQL?
So there's a buffer in case MySQL's write speed can't keep up with DynamoDB's.
@@ProgrammerZamanNow Thanks, Pak Eko.
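The buffering answer above can be sketched with a plain in-memory queue standing in for the stream (a toy model; the real pipeline uses Kafka/Kinesis consumers):

```python
from collections import deque

# Toy sketch of why a stream sits between the OLTP writes and MySQL:
# the stream absorbs write bursts, and the slower consumer drains it
# in batches at its own pace (eventual consistency).

buffer = deque()          # stands in for the Kafka/Kinesis stream

def produce(order):
    buffer.append(order)  # fast path: DynamoDB write, then async publish

def consume_batch(batch_size=2):
    # Slow path: drain up to batch_size events into the analytical store.
    batch = []
    while buffer and len(batch) < batch_size:
        batch.append(buffer.popleft())
    return batch

for i in range(5):
    produce(f"order-{i}")   # a burst of 5 writes arrives instantly
first = consume_batch()     # the consumer catches up at its own pace
```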
Mas, could I request a tutorial on a simple version of a use case like this? Using Java/Go/Node maybe?
Yeah, let's go, Pak Eko.
up
You can run the DynamoDB emulator locally; you can use Docker for it.
Great knowledge here, keep it up, Pak Eko.
DynamoDB can be installed on a laptop too, but only the dynamodb-local version.
A DynamoDB GSI is maybe more like using insert/update/delete triggers in MySQL, Mas: data is automatically stored/updated/deleted whenever the main table runs its operation.
It's different; with triggers we'd have to create them manually.
@@ProgrammerZamanNow But does it work the same way? I've never used it.
If DynamoDB were replaced with Cloud Firestore, would that be feasible? What would the pros and cons be?
Nice, we get to learn the architecture of a system with pretty heavy traffic...
Where do account categories fit in: "gacor" (hot), normal, and "gagu" (dead)?
Gacor = 30 orders
Normal = 18 orders
Gagu = 3 orders
Some advice, Bang: I'm a Grab driver. How can I make my account gacor?
Pak, what programming language did you learn first?
PHP
A GSI is like creating a new key/index on a table column, so that later you can sort/group by that new index.
Great.
@@ProgrammerZamanNow But if we're talking about the database's resources and performance, would there be any particular effect, Mas and Kang? It's basically like duplicating the table, so doesn't that double the resource usage? And what about the performance cost of the DB duplicating its data?
@@edricgalentino Right, it definitely duplicates, as NoSQL does. But there's a single-table design concept for DynamoDB.
I also struggled at first; because of a design mistake, the bill ended up high.
I use DynamoDB; the ID can't be set manually, it has to come from them. After each save, can it return the ID that was created?
@@wahyono1739 You mean when adding an item to the table, Mas?
I didn't expect a company of Grab's caliber to still use MySQL.
Why not just use Vitess?
Maybe they wanted to use an AWS cloud solution.
When a practical solution like this gets discussed, I'm very interested.
All the development knowledge comes out.
What specific skill is worth learning right now in the IT world?
hash map
Great, content like this is the best, PZN 👍
This is really interesting, Mas. If there's more, please share it. Maybe in the future you could also explain what orchestration and choreography mean in application architecture design, since so many systems now use choreography for their transaction flows, like Kafka or Kafka Connect for database replication.
An orchestra is the music thing, boss.
Oh, there's AWS too...
I thought Grab was exclusively on Azure...
More like this often, Pak. Interesting.
This is so cool... Insane 👏
Grab is really cool for being willing to share a real-world case like this..
🙇♂
It's Grab, Mas.
@@MuhammadRizki-cl3ru Wow... Turns out it's different...
Okay, I stand corrected...
Pak, I'd like to make a request: please cover how to handle stock changes that can come from anywhere, like the case you once posted on IG.
Great, Pak. Please cover use cases like this more often.
Two DBs: one DynamoDB, one MySQL for history, I think...
If the genre they use is OLTP with key-value pairs, then Firebase could be used for OLTP too, right, Bang? It can also generate an auto ID for the document number. And the data is unstructured, meaning a single document can hold a huge number of columns.
Sure, it could. It's just that Grab probably wanted to use AWS; Firebase belongs to Google.
Wow, millions of orders. My office can only handle around fifty thousand 😢😢
Cool, Bang... Cover stuff like this more often, let's go.
Could you build the live project for this, Mas?
So we can picture it 😊
Great explanation.
Hmm, so the data users see in the app only lasts three months; the rest is kept for analytics.
*But why? As a user I'd no longer know what I've eaten :D (so I can't repeat an order older than three months, because the history is gone.)
Usually beyond that, especially with lots of data, it becomes an issue. The deeper the data, the bigger the load. So in my opinion three months is more than enough.
@@skzulka But once a transaction is finished, isn't that data just "at rest", only viewable (get) as history? Is it still heavy?
@@thearka443 Even at rest it still eats storage and memory, which means it adds cost.
@@thearka443 As far as I know, the deeper the data sits, even fetching one record puts the DB's performance to the test.
With a million-plus records you'd definitely feel the impact; even with indexing it's still the same.
So that kind of query is only run when needed, usually internally for data analysis. If, say, 1,000 users pulled old data, the server would choke too. That's why it's capped at a year of history at most (depending on how much the server can handle, on the application side). As far as I know, Tokopedia and others also have limits: some only the last 3 months, some 6 months, max 1 year. CMIIW 🙏
The three months is only in the transactional database; viewing history still goes through the analytical database, so the data can still be seen.
Finally, let's go, Pak.
Pak, a question: if I want to go into AI/Data Science, do I have to learn backend-type things too, or how does that work?
I'm still confused about the difference between backend and the AI/Data Science side.
If you go into AI/data, in college you'd usually take a data science major, so you study things like machine learning, AI, data visualization, deep learning, and so on.
@@rafi4637 Yeah, I understand that part; what I'm still confused about is whether the way you learn it is the same as backend or not.
@@Cebong-qj1sv You have to understand statistics and such. Coding for data, usually in Python, isn't as complex as web dev, but you really have to grasp the theory.
Great discussion, Pak.
Please cover more material like this, Pak, about the technologies companies like this use.
Actually the end goal is more about cost saving; performance is probably not much different, since they've basically used DynamoDB and Aurora from the start.
If you have time, please build out the case study, Mas Eko.
What might be the reason for keeping the data up to three months?
There's no ongoing order that lasts three months, right? It's done within hours. Honestly even three months seems too long, since the data is already in MySQL;
to view the history you can just look in MySQL.
Too long; it wastes resources 😊
PZN is cool, the programming content is unique... great, Bang.
Pertalite pak
The hard part about implementing ideal systems like this is when the project depends on a client, and the client is the "don't want to know" type: everything must be agreed to, or they walk. For example, a system has been running five years, then mid-way the client requests that all transaction details be stored for five years and be exportable to a single Excel file (an average of 35 million transaction records a month). If you tell them it's nearly impossible and offer another solution, the client threatens to shut down the project and won't hesitate to move elsewhere.
Just give them examples, Mas: even bank history is limited, Tokopedia purchase history is limited too. If the client still insists after that, impressive.
Pertadex
pertamax
I want you to explain it directly, not read and parse the English word by word. It shows you didn't prepare.
Well, sorry, Bang. Just go read the link in the description yourself; no need to get angry.
Why so bossy? :v
Women ☕🗿
Whoa bro, if you don't get what Kang Eko means, your skills just aren't there yet, haha.
Don't post weird comments when your skills aren't even beyond Kang Eko's.
Through the PZN channel, Kang Eko has already produced plenty of quality programmers with the knowledge he teaches, while you just snipe from the sidelines, lol.
@@kukuhaditya9228 Haha, true, Bang. Only the PZN channel puts this much effort into basics playlists from beginner to advanced.