Networking and Communications

Oct 29, 2013

Dennis Ponne
Senior Network and Systems Engineer

address: Binnenhof 10, 3911 NP Rhenen
date of birth: 29-10-1978
marital status: divorced
children: one daughter
nationality: Dutch
phone number: +31 (0)6 57 57 89 61
email address: dennis@ponne.nu
gender: male



Personal

- Strong analytical skills.
- Eager to learn about new technologies.
- Strong problem-solving skills.
- Good communication skills, both oral and written.


Technical

- Strongest areas of expertise: designing, implementing, optimizing, troubleshooting and managing LAN/WAN networks.
- Extensive experience with Cisco, Foundry, Force10 and Juniper equipment and the IOS, Brocade OS, FTOS and Junos operating systems.
- Specialized knowledge of and experience with various protocols in routing and switching environments.
- Broad expert knowledge of IT infrastructures (Unix, Linux, Solaris, Microsoft servers, mail servers, DNS servers, web servers, caching servers, load-balancing servers, virtualization servers and remote access solutions).
- Excellent documentation skills.


Schooling, Courses and Exams

- MVBO administration, completed.
- MBO system engineering, completed.
- HBO system engineering, 1.5 years.
- CCNA, completed.
- Juniper Junos, completed.
- Juniper SRX, completed.
- A10 Networks, completed.






Employers:

Employer             Date begin    Date end       Duty description
Trueserver BV        01-05-2008    still working  Senior Network & System Engineer
MuntInternet BV      01-12-2005    01-05-2011     Senior Network & System Engineer
Iwax BV              01-04-2006    01-05-2008     IP and Server Manager, Mobile TV
GarnierProjects BV   01-10-2000    01-12-2005     Network & System Engineer
Trueserver BV        01-01-1998    01-10-2000     Network & System Engineer
Netgate BV           01-01-1997    01-01-1998     Software & System Engineer



Pre-1997 companies:

Unisource BV   SunOS firewall engineer
Cistron BV     Linux engineer
NMI Delft      Novell Netware engineer


Hardware

- Cisco routers.
- Cisco switches.
- Cisco PIX and ASA security products.
- Juniper routers.
- Juniper switches.
- Juniper firewalls.
- Foundry routers.
- Foundry switches.
- Force10 switches / routers.
- Extreme Networks switches / routers.
- HP, 3Com, Netgear and other vendors' switches.
- Netscreen, SonicWall, DrayTek firewall / small router devices.
- Dell, HP, Supermicro servers.


Software

- Red Hat, Ubuntu, Slackware, Debian, CentOS Linux.
- Microsoft Windows 3.1 up to Server 2008.
- Solaris and OpenSolaris.
- HP-UX.
- Junos.
- IOS.
- FTOS.
- Brocade OS.
- OS-specific software: Varnish, HAProxy, IIS, Apache, Lighttpd, PHP, Java, Tomcat, MySQL, Oracle, MSSQL, Asterisk (VoIP) and many others.





Project: Trueserver BV, first build-up of the hosting network; design and management.

Time period: 1998 - 2000

Used hardware:

- Cisco 7206 VXR backbone routing
- Juniper M20 backbone routing
- Extreme Networks backbone switches
- 3Com access switches

Used protocols:

- BGP4
- OSPF
- RIP
- 802.1Q VLANs
- STP, RSTP, BPDU
- ATM E1, E3


Responsibilities:

- Design from the ground up
- Maintenance
- Monitoring
- Billing


A simple setup: in the beginning a Cisco 7206 VXR connected to the Amsterdam Internet Exchange and two BGP4 transit providers, providing customers fast routes to the internet for up to 40 racks in the Telecity 2 datacenter. After a while the Cisco was replaced with a brand-new Juniper M20 for routing and basic firewalling.

All devices were monitored by a package called Big Brother, and every connection was monitored by MRTG. The network design was very simple, with one router and several backbone switches that connected the 3Com access switches using VLANs.


Reference: Vincent Houwert, vincent@true.nl



Project: GarnierProjects BV & Netholding, Amsterdam-area streaming / media network build-up; design and management.

Time period: 2000 - 2005

Used hardware:

- Juniper M40s backbone routing
- Foundry BigIron backbone switches
- HP and 3Com access switches
- Extreme Networks switches
- Cisco switches

Used protocols:

- BGP4
- OSPF
- MPLS
- RIP
- IS-IS
- STP, RSTP, BPDU
- VRRP
- Trunking
- 802.1Q


Responsibilities:

- Design from the ground up
- Maintenance
- Monitoring
- Billing


Set up from the ground a redundant network with two M40 routers with multiple Amsterdam Internet Exchange, private peer (UPC, Ziggo, etc.) and transit connections, divided over two datacenters, plus a Juniper M20 in London connected by MPLS to the Amsterdam network, providing a connection to the London Internet Exchange. Every datacenter had its own Foundry BigIron setup for customer connections, and the datacenters were connected by a ring based on Cisco switches for layer 2 VLAN services. The two core datacenters connected a third datacenter on the ring, providing redundancy and failover options for customers.

The connections were monitored by a self-written system that could send an SMS if any port went down or a change happened in the network. All customer connections were monitored by MRTG. The primary datacenter connected about 80 racks and provided IP connectivity to customers like Akamai, Radio 538, Telegraaf and BBNed.

After a while the network was pushing more than 12 gigabit/s and had grown with an extra router at the third datacenter, connecting more private peers.


Reference: Raymond Garnier, raymond@garnierprojects.com



Project: MuntInternet BV, first build-up of the hosting network; design and management.

Time period: 2005 - 2008

Used hardware:

- Juniper M20 backbone routing
- Juniper MX960 backbone routing
- Foundry MLX / XMR backbone routing
- HP access and core switching
- Foundry and Aces load-balancing L7 switches

Used protocols:

- BGP4
- OSPF
- OSPFv3
- IS-IS
- VRRP and VRRP-E
- STP, RSTP, BPDU
- 802.1Q
- IPv6


Responsibilities:

- Design from the ground up
- Maintenance
- Monitoring
- Billing


Set up an IP network for high-quality hosting customers like Telegraaf Media, Tibaco and other international parties. The design was a simple dual-router setup in the beginning, with two Juniper M20 routers, soon upgraded to a Juniper MX960 router and a Foundry MLX router, completing the network with a 10 gigabit fiber ring in Amsterdam and all backbone connections on 10 gigabit ethernet, including the connections towards the transit providers and the Amsterdam Internet Exchange.

When I left, the network was a fully integrated IPv4 and IPv6 network, running VRRP-E between the MLX and the MX960 for customer redundancy. Monitoring and billing were done with RRD and Nagios.

From 2008 up to late 2011 I helped the company in my spare time to maintain the backbone; after a while the other technical person at MuntInternet had learned all there was to know from me to keep the network stable and its configuration clean.


Reference: Jaap Harmsma, jaap@muntinternet.nl





Project: Trueserver BV, rebuilding the core network, adding security services and building a network for the Highlander virtualization platform.

Time period: 2008 - 2013

Used hardware:

- Juniper MX240 backbone routing
- Force10 backbone switching
- Juniper SRX security
- Juniper EX switching
- HP, Cisco and Extreme access switches


Used protocols:

- BGP4
- OSPF
- OSPFv3
- VRRP
- STP, RSTP, BPDU
- 802.1Q
- IPv6
- Trunking / bonding


Responsibilities:

- Redesign of the network
- Maintenance
- Extensive monitoring with DDoS detection


When hired by Trueserver in mid 2008, I was going to help out my friend by filling a job position as a system engineer. It soon became clear to me that the equipment used for their network, which extended across Amsterdam, Toronto, New York, London and Düsseldorf, was experiencing stability problems, and that in 2007 and the first half of 2008 their uptime was only about 91 percent. In provider language that is totally unacceptable. The problem was their Force10 equipment: they had created a backbone using all-in-one devices. Those Force10 devices were running the network from Amsterdam, connecting all their customers over three datacenters.

When one of the devices failed or had a routing issue, the complete network failed. So after looking into this problem for a while, I came to the conclusion that the Force10 devices were inadequate to deliver the stability expected of a professional hosting company.

I also took a good look at their fiber network and the necessity of all the connections to the different companies, and made a design plan that included shutting down one of the datacenters as part of the core network. I also decided that restoring stability was a matter of making the backbone layered again.






In the new setup, two Juniper MX240s do the routing to the internet with BGP4, with OSPF in between, creating a fully redundant BGP4 core that can be updated without any interference for the customers. The two MX240s are connected to the Force10 routers in a square-plus-X topology, making it fully redundant: only a null route is sent towards the Force10s, while the OSPF routing in between creates fast failover between the devices, connecting each customer with VRRP to two independent Force10 devices.
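The VRRP customer-gateway pattern described above can be sketched as follows. This is a generic Junos-style fragment with documentation addresses, not the actual Trueserver configuration, and the Force10/FTOS syntax differs:

```
# Illustrative Junos fragment: two routers share virtual gateway 192.0.2.1.
interfaces {
    ge-0/0/0 {
        unit 0 {
            family inet {
                address 192.0.2.2/24 {
                    vrrp-group 1 {
                        virtual-address 192.0.2.1;  # customers point their default gateway here
                        priority 200;               # higher priority wins mastership
                        preempt;                    # reclaim mastership after recovery
                    }
                }
            }
        }
    }
}
```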


The complete fiber-optic links were removed and all external connections were drawn back to the Amsterdam area, making the network cost-effective, better controllable and stable, with uptimes of 100 percent for customer connections in 2010, 2011 and 2012. This also saved Trueserver a large amount in monthly recurring costs.


The Amsterdam fiber ring is set up with Juniper EX switches, making it a fully redundant ring connecting the three datacenters. The same Juniper EX switches have been used to set up a fully redundant cluster for the Highlander virtualization technology: a fully routed EX4200 cluster of five switches, connecting over 96 servers that pair up. Every server is connected to four of those switches; if a connection or a switch fails, there is no customer impact, not even packet loss. I also did the complete network configuration on the Linux hosts for bonding and failover purposes.
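The Linux side of such a setup usually comes down to a bonded interface. A minimal Debian-style /etc/network/interfaces sketch, with hypothetical addresses and interface names rather than the actual host configuration, could look like:

```
# /etc/network/interfaces (Debian/Ubuntu with the ifenslave package installed)
auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bond-slaves eth0 eth1      # physical NICs enslaved to the bond
    bond-mode active-backup    # failover without switch-side LACP
    bond-miimon 100            # link check interval in milliseconds
```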


The complete network is monitored with sFlow and NetFlow, on which we can see almost instantly whether anomalous traffic is being sent; a self-written monitoring system in PHP parses the data and sends the NOC alerts about high traffic or no traffic at all. The complete network is also monitored over SNMP, and if an interface fails or a port shows high error rates, this too is sent to the monitoring system, giving the NOC an alert. All network devices are backed up by an automated system every 5 minutes, which keeps historical data and changes; when a change is made on a system, a network alert is also sent so it can be reviewed by the NOC.
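The alerting logic described above can be sketched in a few lines. This is an illustrative Python sketch with hypothetical thresholds and data shape, not the original PHP system: given per-port traffic samples, it flags both unusually high traffic and silent ports.

```python
def check_ports(samples, high_bps=1_000_000_000):
    """Return (port, reason) alerts for anomalous traffic.

    samples maps a port name to its measured bits/s over the last
    polling interval. The threshold is illustrative, not a production value.
    """
    alerts = []
    for port, bps in samples.items():
        if bps >= high_bps:
            alerts.append((port, "high traffic"))   # possible DDoS or misbehaving host
        elif bps == 0:
            alerts.append((port, "no traffic"))     # dead link or silent customer
    return alerts

# Example: one busy port, one dead port, one normal port.
alerts = check_ports({"ge-0/0/0": 2_500_000_000, "ge-0/0/1": 0, "ge-0/0/2": 300_000})
```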


In 2011 Trueserver came up with a new idea to extend their customer base. Since I had been on different projects maintaining Cisco ASA, Cisco PIX and other security devices, we came up with a plan to use one vendor for customer firewalling and VPN connections. Since that decision I have configured many of those devices, including high-availability cluster configurations and multi-office connections over IPsec VPNs, connecting customers securely to their own hardware in the Trueserver datacenters and making safe communication between offices possible.


Reference: Vincent Houwert, vincent@true.nl




A few projects for my employers.


Case: DHL (Deutsche Post)

DHL had a problem at 15 of their 200+ offices in the Netherlands with speed and stability towards their hosted platform at Trueserver. All offices are connected through KPN's Epacity VPN.

DHL reported that they had major issues with some of the connected offices, and after trying for months to solve the problem with KPN, they called us to see if we could do something about it, since Trueserver hosted and maintained the platform to which all the offices connect.


After carefully reviewing all the primary firewalling systems and the gateway connections in the datacenter, and taking notes about the incoming packets while monitoring the Epacity connection in the datacenter, I visited one of these offices.

The problem was clear: downloads didn't exceed 5 kB/sec and stalled after 5 to 10 minutes, making it an unworkable situation.

After monitoring the connection at one of the failing offices with Wireshark and Ethereal, I came to the conclusion that somewhere in KPN's network the MTU was misconfigured, breaking all the packets. Since the gateway at the datacenter had no support for adjusting TCP segment sizes to the frame size, we had to lower the MTU at the customer side. After doing that, none of the 15 locations reported any more errors.
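The arithmetic behind an MTU fix like this is simple: every IPv4 packet carries at least 20 bytes of IP header and, for TCP, another 20 bytes of TCP header, so the usable TCP payload (the MSS) is the path MTU minus 40 bytes. A small illustrative Python sketch, using standard header sizes rather than the actual Epacity figures:

```python
IP_HEADER = 20   # minimal IPv4 header, no options
TCP_HEADER = 20  # minimal TCP header, no options
ICMP_HEADER = 8  # ICMP echo header, relevant when probing with ping

def tcp_mss(mtu):
    """Largest TCP payload that fits in one frame of the given MTU."""
    return mtu - IP_HEADER - TCP_HEADER

def ping_probe_size(mtu):
    """Payload size for a don't-fragment ping that exactly fills the MTU
    (e.g. `ping -M do -s <size> host` on Linux)."""
    return mtu - IP_HEADER - ICMP_HEADER

# On a standard 1500-byte Ethernet path:
assert tcp_mss(1500) == 1460
assert ping_probe_size(1500) == 1472
```

Probing with don't-fragment pings of decreasing size is a quick way to find where along a path the effective MTU drops.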


Case: KPMG Achmea

In this case I was called in to provide a second opinion about a failing datacenter and a failover setup that didn't work as it was supposed to.

The datacenter was equipped with Cisco 79xx switching / routing equipment and several CWDM/DWDM fiber paths to create redundancy.

After looking into the case it became clear that a lack of maintenance and misconfiguration, combined with the use of old legacy protocols, led to the failing failover between the two datacenters.



And there have been many other cases: load-balanced clusters for VNU Media, Tomcat/Java/Apache servers for the Dutch bank ING, a streaming platform for Radio 538 and other video and music stations, advice and design for several other IP networks such as 2fast Internet Services, Euroaccess, Netrouting and Proserve, a fiber project towards Circuit Park Zandvoort delivering a huge internet connection for racing companies plus the local infrastructure at the park itself, and many other small and nice projects involving network and system management, build-up, design, redesign and troubleshooting.

If you have any other questions, please do not hesitate to call or email me.