CENIC & Juniper Networks Boost Performance with 800 Gbps Capacity for the AI Era
Building on its strong relationship with Juniper Networks, CENIC has embarked on upgrading the backbone of its California Research and Education Network (CalREN) to 800 Gbps. Using Juniper's PTX10002-36QDD router and 800ZR coherent pluggable optics, CENIC aims to enhance the peering experience for its users and simplify the AI evolution, letting researchers research and educators educate rather than worry about available bandwidth. At OFC 2025, Heavy Reading's Sterling Perrin discusses the implementation and its benefits with CENIC VP of Engineering and Deputy CIO Robert Kwon and Juniper Networks Global Architect Robert Damon.
You’ll learn
Some of the benefits of 800G ZR technology
How CENIC supports research and education efforts in California
Transcript
0:02 [Music]
0:07 Sterling Perrin: I'm Sterling Perrin, an analyst with Heavy Reading, now part of Omdia. We're at OFC, the 50th anniversary of OFC, talking about 800 gig and the future of networking, and I'm very happy to be joined today by Robert Kwon, who is with CENIC, and Robert Damon, who is with Juniper Networks. Hi guys, welcome.
0:24 Robert Kwon: Hi, thank you. Great to be talking to you.
0:29 Sterling Perrin: I think everybody knows Juniper, so let me move right to CENIC. Robert, do you want to give us a little bit of an overview of who CENIC is and what the organization does?
0:39 Robert Kwon: Sure. So CENIC is a member nonprofit that supports the state of California, and our mission is to support the research and education efforts of California by providing a research and education network. We have 8,000 miles of fiber in the state of California that we light up with our own DWDM system. We've recently been on a journey called NGI, our Next Generation Infrastructure, which set out to modernize not just our protocol stack but also our physical infrastructure. This has culminated in an 800 gig deployment on our backbone as a production link between two of our sites, and now we're about to deploy 800 gig to San Diego State University, an R1 research institution, which we believe makes it one of the first campuses with it.
1:28 Sterling Perrin: Okay. So obviously research and education networks have historically been leading adopters of new fiber-optic technologies, and it sounds like you fit into that bucket. You mentioned 800 gig; at OFC, and in our research generally, we're seeing that this is an important year for 800 gig, so why don't we talk about that component of your network. It's 800ZR that you're implementing?
1:57 Robert Kwon: Yes, we're using 800 gig ZR on our network, and it's been such a great deployment. It was such a quick deployment: what used to take days with a transponder shelf or a DCI shelf is now just a pluggable in our devices. We're seeing our implementation time go from days of procuring space and power in our racks and colocations to literally 15 minutes of plugging in the optics. It's been such an easy turn-up that it's changed the way we look at how to augment our backbone. And it hasn't meant losing features the way we thought it might. The optics all expose pre-FEC and post-FEC statistics, and we're able to see the right kinds of detail from the optics and from the routers to make sure these things are performing correctly. From a business perspective, we're also seeing close to a 60% reduction in TCO, which of course makes it a lot more viable for us to deploy these in a lot more places. Historically we would have to budget for this and spread the budget over a couple of cycles to support the high-bandwidth needs of our researchers and our education constituents.
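Kwon's point about pre-FEC and post-FEC visibility is worth a concrete illustration: coherent FEC corrects errors up to a hard cliff, so operators watch the pre-FEC bit error rate as the early-warning signal and treat any post-FEC (uncorrected) errors as a failure. Below is a minimal health-check sketch in Python; the port names, counter values, and alert threshold are illustrative assumptions, not CENIC's or Juniper's actual telemetry interface.

```python
# Minimal sketch of an optic health check. The threshold and readings are
# illustrative assumptions; real values would come from router telemetry.

# Hypothetical alert line: coherent FEC typically corrects pre-FEC BER up
# to roughly 1e-2, so warn well before the optic approaches that cliff.
PRE_FEC_BER_ALERT = 1e-3

def check_optic(port: str, pre_fec_ber: float, post_fec_errors: int) -> str:
    """Classify an optic's health from two commonly watched counters."""
    if post_fec_errors > 0:
        return f"{port}: FAIL - uncorrected errors reaching the host"
    if pre_fec_ber > PRE_FEC_BER_ALERT:
        return f"{port}: WARN - pre-FEC BER {pre_fec_ber:.1e} near FEC limit"
    return f"{port}: OK - pre-FEC BER {pre_fec_ber:.1e}, healthy margin"

# Hypothetical sample readings for two backbone ports.
for port, ber, errors in [("et-0/0/0", 2.3e-4, 0), ("et-0/0/1", 4.1e-3, 0)]:
    print(check_optic(port, ber, errors))
```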
3:11 Sterling Perrin: So for the two sites where you're doing 800ZR, what was the previous network speed and topology?
3:22 Robert Kwon: Sure. So we were at N x 100 gig: we were doing 3x100 or 4x100, and in some places 5x100. We were in the middle of deploying the 400 gig ZR+, the high-powered ZR, but then 800 gig came a lot quicker than we expected it to, so we started moving toward 800 gig rather than focusing on 400 gig. Of course, there are still going to be 400 gig deployments, just because of the fiber quality we have. Our fiber is not that great in terms of the loss we have on certain spans, due to the fiber type and the issues I've had. But these optics have performed great even on our fiber, and we're currently running them across a 100 km span of that fiber, and it's been really great.
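Fiber quality matters here because every span has a loss budget: attenuation per kilometer plus connector and splice losses must stay within what the optic and line system can tolerate. A back-of-the-envelope sketch of that arithmetic, with illustrative numbers rather than measured CalREN values:

```python
# Rough span-loss estimate. The attenuation figure, fixed losses, and
# link budget are illustrative assumptions, not CalREN measurements.

def span_loss_db(km: float, atten_db_per_km: float = 0.25,
                 fixed_losses_db: float = 2.0) -> float:
    """Total loss: per-km fiber attenuation plus connectors/splices."""
    return km * atten_db_per_km + fixed_losses_db

loss = span_loss_db(100)  # the ~100 km span mentioned above
budget = 30.0             # hypothetical amplified-link budget in dB
print(f"Estimated loss: {loss:.1f} dB, margin: {budget - loss:.1f} dB")
```

On older or lossier fiber the per-kilometer figure climbs and the margin shrinks, which is where the 400 gig long-span mode discussed a little further on comes in.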
4:09 Sterling Perrin: So basically, you did have some 400ZR in your network.
4:12 Robert Kwon: Yes, we did.
4:14 Sterling Perrin: But you've decided that going forward, for the most part, 400ZR won't really be used; you're going right to 800ZR because of all the benefits, including the TCO.
4:24 Robert Kwon: Yes. The only corner case is that the 800 gig optics can also do 400 gig over very long distances by changing the modulation, so we're hoping to leverage those on the spans that are very far and have a lot of loss, so that we can still get 400 gig over those spans. But wherever we can, we're going to do 800 gig as much as possible.
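The corner case Kwon describes is a general property of coherent pluggables: dropping to a simpler modulation format halves the bit rate but adds noise margin, which stretches reach. A hypothetical mode-selection sketch follows; the formats are typical for this class of optic, but the reach figures are purely illustrative, not from any vendor datasheet.

```python
# Hypothetical operating modes for an 800 gig coherent pluggable. Reach
# numbers are illustrative only; real limits depend on the line system.
MODES = [
    {"gbps": 800, "modulation": "DP-16QAM", "reach_km": 500},
    {"gbps": 400, "modulation": "DP-QPSK", "reach_km": 1500},
]

def pick_mode(span_km: float) -> dict | None:
    """Pick the fastest mode whose assumed reach still covers the span."""
    viable = [m for m in MODES if m["reach_km"] >= span_km]
    return max(viable, key=lambda m: m["gbps"]) if viable else None

for span_km in (100, 900):
    mode = pick_mode(span_km)
    label = f'{mode["gbps"]}G {mode["modulation"]}' if mode else "no mode fits"
    print(f"{span_km} km span -> {label}")
```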
4:50 Sterling Perrin: Do you have the capacity needs for that today, or was this more far-future planning, to get to 800 gig?
4:55 Robert Kwon: We have the capacity needs just from our users using the internet. But we also see the need for these larger bandwidths because the flows are getting bigger. We support a lot of research and education efforts, and it's primarily research that requires high bandwidth, or high-speed data transfers, to move data sets from one place to another. One of the things CENIC has been helping out with is AI efforts, and AI has a lot of data sets that get moved around. We also connect our members to the Large Hadron Collider so they can get those data sets, and it produces a large amount of data.
5:34 Sterling Perrin: Right. Your customers, or users, for people not familiar with how research and education networks work: your end customers are the universities themselves, right? They're who's feeding into your network.
5:47 Robert Kwon: Correct, yes, although we support more than the universities. We support the universities, of course, the higher eds, but we also support the K-12s, the libraries, and cultural institutions like the Getty and SFJAZZ. So there are also tangential benefits: a place like the Getty, which has a large store of images and video, benefits from this larger bandwidth across the backbone, and it helps a lot of people get to that content.
6:12 Sterling Perrin: That's great. So let me bring in Robert Damon from Juniper. Obviously Juniper has had some role in the CENIC network, I don't know about historically, but certainly going forward. Maybe talk a little bit about Juniper's role with CENIC, ongoing and with the 800ZR work.
6:28 Robert Damon: Thank you, Sterling. Juniper has been a proud partner with CENIC for many years now, and as they've moved from 100 gig to 400 gig and now 800 gig, we've been along with them. As Robert was describing, CENIC serves not only the most elite research institutions but also over 8,000 K-20 institutions, some of which are in remote locations, and they've leveraged our MX, our ACX, and now our PTX platforms to serve them and meet their needs. We're really excited that they chose our PTX10002 platform to grow their 800 gig network. The PTX10002 is a really powerful platform, architected around the Express 5 ASIC, a single-chip 28.8 Tbps design with advanced power delivery to support 36 ports of 800 gig ZR. As you know, the power delivery and thermal management required for 800 gig ZR are really critical in these networks. So we really feel we're going to be able to support the needs of the CENIC network today and tomorrow.
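Those figures are internally consistent and easy to sanity-check: 36 ports of 800 gig comes out to exactly the 28.8 Tbps that the single Express 5 chip provides.

```python
# Sanity check on the quoted PTX10002-36QDD numbers from the interview:
# 36 x 800G ports should match the 28.8 Tbps single-chip capacity.
ports, port_rate_gbps, chip_capacity_gbps = 36, 800, 28_800
assert ports * port_rate_gbps == chip_capacity_gbps
print(f"{ports} x {port_rate_gbps}G = {chip_capacity_gbps / 1000} Tbps")
```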
7:37 Sterling Perrin: Excellent. 800ZR is obviously one of the big topics at this show; we weren't sure how quickly it would come, but it's here, and that's been interesting. Organizations like CENIC are really pioneering it. So what's next for CENIC? Is 800ZR kind of the end game, or where do you go from here?
8:05 Robert Kwon: No, it's definitely not the end game. We're looking forward to the next generation, which I believe is 1.6 terabits per second, so we're trying to get ready for that. But as everyone knows, AI is the hot topic and the buzzword for everything right now, so we want to support the AI workflows our members need and also help democratize access to AI resources. We've done things like CENIC AIR, where education and research users can now use compute GPUs donated across the CENIC network. But we also want to make sure they have the network they need, not just in speed but in features and technology, to make adoption as easy as possible, so researchers can focus on doing research and educators can focus on educating the next generation on topics like AI, as opposed to figuring out how to get the bandwidth to do AI.
9:02 Sterling Perrin: Excellent. Well, congratulations on the deployment, and on your involvement as well. We'll look forward to catching up in the future.
9:10 Robert Kwon: All right, thank you.
9:11 Robert Damon: Thank you, Sterling.
9:13 [Music]