How to clean the air filter of a Yamaha DT 180

(Photos: IMG_20150723_192343, IMG_20150723_192437, IMG_20150723_192455, IMG_20150723_192510, IMG_20150723_192522)

It's quite simple. Just remove the four screws on the cover to access the filter. Then remove the other four screws that hold the support for the foam, which does the actual air filtering. Now clean every part, as well as the inside of the air filter cavity. The foam itself should be cleaned with thinner/mineral spirits, which easily removes all the oil-soaked dirt. Then put everything back in place. The photos detail the whole process.

Howto VPN L2TP Pre-Shared Key

Tested on: CentOS 6.6
Tools used: strongSwan (https://www.strongswan.org/) for the IPsec tunnel, xl2tpd (https://www.xelerance.com/services/software/xl2tpd/) as the Layer 2 Tunneling Protocol (L2TP) daemon, and ppp.

Prerequisites:

[root@centos02 ~]# yum install epel-release
[root@centos02 ~]# yum install strongswan ppp xl2tpd

Part 1: Configure Strongswan

Edit the following files:

[root@centos02 ~]# vi /etc/strongswan/ipsec.conf
# ipsec.conf - strongSwan IPsec configuration file
config setup
        strictcrlpolicy=no
        #charondebug="ike 4, knl 4, cfg 2"    #useful debugs
conn %default
        ikelifetime=1440m
        keylife=60m
        rekeymargin=3m
        keyingtries=1
        keyexchange=ikev1
        authby=xauthpsk
conn L2TP-PSK-CLIENT
        keyexchange=ikev1
        type=transport
        authby=secret
        ike=3des-sha1-modp1024
        rekey=no
        left=%defaultroute
        leftprotoport=udp/l2tp
        right=134.142.135.72        # IP of your VPN Server
        rightprotoport=udp/l2tp
        auto=add

Add your pre-shared key here:

[root@centos02 ~]# vi /etc/strongswan/ipsec.secrets
# /etc/ipsec.secrets - strongSwan IPsec secrets file
: PSK "minhapresharedkey"                 # Pre-Shared Key

Set strongswan to start on boot:
[root@centos02 ~]# chkconfig strongswan on

Start strongswan service:
[root@centos02 ~]# /etc/init.d/strongswan start

Try the IPsec connection:
[root@centos02 ~]# strongswan up L2TP-PSK-CLIENT

If you get the line below, your IPSec tunnel is working:
connection 'L2TP-PSK-CLIENT' established successfully

To shut down the IPsec tunnel, run:
[root@centos02 ~]# strongswan down L2TP-PSK-CLIENT
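Since the up/down scripts later in this post key off that success line, it can help to wrap the check in a tiny helper. A minimal sketch (the function name check_tunnel is my own; the success string is the one quoted above, which may vary across strongSwan versions):

```shell
# check_tunnel reads strongswan output on stdin and reports tunnel state.
# Usage: strongswan up L2TP-PSK-CLIENT | check_tunnel
check_tunnel() {
    if grep -q 'established successfully'; then
        echo "tunnel up"
    else
        echo "tunnel failed"
        return 1
    fi
}

# Demo with the exact success line quoted above (prints "tunnel up"):
echo "connection 'L2TP-PSK-CLIENT' established successfully" | check_tunnel
```

Wired up as `strongswan up L2TP-PSK-CLIENT | check_tunnel`, the non-zero exit code makes failures easy to act on in scripts.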

Part 2: Configure Xl2tpd

Edit the config file:

[root@centos02 ~]# vi /etc/xl2tpd/xl2tpd.conf
[global]
force userspace = yes
;debug tunnel = yes
; Connect as a client to a server at 134.142.135.72
[lac L2TPserver]
lns = 134.142.135.72
require chap = yes
refuse pap = yes
require authentication = yes
; Name should be the same as the username in the PPP authentication!
name = gustfn
;ppp debug = yes
pppoptfile = /etc/ppp/options.l2tpd.client
length bit = yes

And this one:

[root@centos02 ~]# vi /etc/ppp/options.l2tpd.client
ipcp-accept-local
ipcp-accept-remote
refuse-eap
noccp
noauth
crtscts
idle 1800
mtu 1200
mru 1200
nodefaultroute
lock
connect-delay 5000
require-mppe

name gustfn
password MinhaSenhaDaVpn

Add your VPN username and password here:

[root@centos02 ~]# vi /etc/ppp/chap-secrets
# Secrets for authentication using CHAP
# client    server    secret            IP addresses
gustfn        *        MinhaSenhaDaVpn

To make things easier, let’s create two scripts: one to bring the VPN up and another to bring it down:

[root@centos02 ~]# vi vpn_up.sh
#!/bin/sh
# Create an IPsec tunnel with a Pre-Shared Key
strongswan up L2TP-PSK-CLIENT | grep "established successfully"
# Start the ppp connection and authenticate with your user/pass
echo "c L2TPserver" > /var/run/xl2tpd/l2tp-control
sleep 5
# Important: you need to add here the routes of your VPN network
route add -net 10.24.48.0 netmask 255.255.255.0 dev ppp0
# And delete this one manually
route del 134.142.135.72

[root@centos02 ~]# vi vpn_down.sh
#!/bin/sh
echo "d L2TPserver" > /var/run/xl2tpd/l2tp-control
strongswan down L2TP-PSK-CLIENT

And make both scripts executable:
[root@centos02 ~]# chmod +x vpn_up.sh
[root@centos02 ~]# chmod +x vpn_down.sh

Set xl2tpd to start on boot:
[root@centos02 ~]# chkconfig xl2tpd on

Start the Xl2tpd daemon:
[root@centos02 ~]# /etc/init.d/xl2tpd start

Done! Now let’s try to check if the VPN is working!
[root@centos02 ~]# ./vpn_up.sh
connection 'L2TP-PSK-CLIENT' established successfully

Great!! Now let’s try to ping an IP on the other side:
[root@centos02 ~]# ping 10.24.48.52
PING 10.24.48.52 (10.24.48.52) 56(84) bytes of data.
64 bytes from 10.24.48.52: icmp_seq=1 ttl=63 time=232 ms
64 bytes from 10.24.48.52: icmp_seq=2 ttl=63 time=181 ms
64 bytes from 10.24.48.52: icmp_seq=3 ttl=63 time=197 ms

The L2TP VPN is working, good job! To shut down the VPN, just run:
[root@centos02 ~]# ./vpn_down.sh

How to build a simple affiliate API using OpenResty

Hello,

In this post I’ll discuss how to create a simple API that provides easy access to a common affiliate partner program. The idea is to have a URL that accepts a few arguments (in our case, the args called id, subid, gender and route).

To do that, we’ll use the powerful nginx-on-steroids bundle called OpenResty. My compliments to Yichun Zhang (agentzh), the head of the project.

First of all, download and unpack OpenResty. (I’ll suppress some simple steps in this doc.)

wget http://openresty.org/download/ngx_openresty-1.7.10.1.tar.gz
tar zxvfp ngx_openresty-1.7.10.1.tar.gz
cd ngx_openresty-1.7.10.1
 ./configure --prefix=/usr/local/openresty --with-http_postgres_module
gmake -j4
gmake install
vi /usr/local/openresty/nginx/conf/nginx.conf

# Content of file nginx.conf
worker_processes 4;
events {}
error_log logs/error.log debug;

http {
 upstream database {
 postgres_server 192.168.0.10:5432 dbname=mydatabase user=postgres password=mypassword123;
 }
 
 server {
 listen 192.168.0.11:8080;
 server_name localhost;
 root /usr/local/openresty/nginx/html;

 location /postgresquery {
 internal;
 postgres_pass database;
 set_unescape_uri $id $arg_id;
 set_unescape_uri $subid $arg_subid;
 postgres_escape $id;
 postgres_escape $subid;
 postgres_escape $referencia $http_referer;

 postgres_query GET "INSERT INTO mytable01 (id, subid, referer) VALUES ($id, $subid, $referencia) RETURNING clickid";
 postgres_output value;
 postgres_rewrite changes 200;
 }

 location /campaign {
 content_by_lua ' 
 local res = ngx.location.capture("/postgresquery", { args = { id = ngx.var.arg_id, subid = ngx.var.arg_subid, http_referer = ngx.var.http_referer } } )
 if res.status == 200 and res.body then

 local cookie_name_click = "COOKIE_CLICK="
 local cookie_value_click = res.body
 local cookie_click = cookie_name_click .. cookie_value_click

 local cookie_name_id = "COOKIE_ID="
 local cookie_value_id = ngx.var.arg_id
 local cookie_id = cookie_name_id .. cookie_value_id

 if ngx.var.arg_subid then

 local cookie_name_subid = "COOKIE_SUBID="
 local cookie_value_subid = ngx.var.arg_subid
 local cookie_subid = cookie_name_subid .. cookie_value_subid

 if ngx.var.arg_gender then

 local cookie_name_gender = "COOKIE_GENDER=" 
 local cookie_value_gender = ngx.var.arg_gender
 local cookie_gender = cookie_name_gender .. cookie_value_gender

 ngx.header["Set-Cookie"] = {cookie_click, cookie_id, cookie_subid, cookie_gender}
 else
 ngx.header["Set-Cookie"] = {cookie_click, cookie_id, cookie_subid}
 end
 else
 if ngx.var.arg_gender then

 local cookie_name_gender = "COOKIE_GENDER="
 local cookie_value_gender = ngx.var.arg_gender
 local cookie_gender = cookie_name_gender .. cookie_value_gender

 ngx.header["Set-Cookie"] = {cookie_click, cookie_id, cookie_gender}
 else
 ngx.header["Set-Cookie"] = {cookie_click, cookie_id}
 end
 end

 if ngx.var.arg_route == "photos" then
 return ngx.redirect("http://mywebsite.priv/photos")
 elseif ngx.var.arg_route == "videos" then
 return ngx.redirect("http://mywebsite.priv/videos")
 else
 return ngx.redirect("http://mywebsite.priv/")
 end
 end
 ';
 }

 }
}
# End of file nginx.conf

I’ll explain what this webservice does. It accepts URLs of this kind:

http://192.168.0.11:8080/campaign?id=123&subid=5&gender=3&route=videos
http://192.168.0.11:8080/campaign?id=123&subid=5&gender=3
http://192.168.0.11:8080/campaign?id=123&subid=5
http://192.168.0.11:8080/campaign?id=123

Where 192.168.0.11 is the IP of my OpenResty server running on port 8080.

When this URL reaches the server, the nginx Lua handler captures the request and makes an internal subrequest to /postgresquery, passing the arguments id, subid and http_referer. The location /postgresquery, which can only be accessed internally, escapes the arguments and performs the insert into the database, returning the value of the column named clickid as its output.

The next step is to check whether the response of the request to /postgresquery returned 200 (if res.status == 200) and whether the request returned any value (res.body). The program then creates two cookies, called COOKIE_CLICK and COOKIE_ID, which contain the number of the click of the originating request, inserted into our table called mytable01, and the ID of the partner, which was given by the query string argument id.

Next, we check whether the two optional arguments subid and gender are present and, if so, create the corresponding cookies.

At last, we check whether there is an argument called route and redirect the user to the proper location on our website, after processing the originating click from a partner website.
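Putting it all together, a client call can be sketched like this (host, port and argument values are taken from the examples above; the commented curl line is only illustrative):

```shell
# Build a campaign URL from the tracking parameters the API accepts
base="http://192.168.0.11:8080/campaign"
id=123; subid=5; gender=3; route=videos
url="${base}?id=${id}&subid=${subid}&gender=${gender}&route=${route}"
echo "$url"
# prints http://192.168.0.11:8080/campaign?id=123&subid=5&gender=3&route=videos

# Against the live server, you could then inspect the cookies and the redirect with:
#   curl -sv "$url" 2>&1 | grep -Ei 'set-cookie|location'
```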

The next image shows the whole process.

Screen Shot 2015-03-10 at 16.44.59

Why choose OpenResty to do this? Because the whole process is very fast, and that matters when you are dealing with a high number of concurrent requests. The user will not notice any of this processing, and your server will be grateful for the light resource usage.

Factory Reset of Apache CloudStack 4.x on CentOS/RH

Just do it.

/etc/init.d/cloudstack-management stop
mysql -e 'drop database cloud'
mysql -e 'drop database cloud_usage'
cloudstack-setup-databases cloud:password@localhost --deploy-as=root
cloudstack-setup-management
/etc/init.d/cloudstack-management start

Example of VLAN tagging on CentOS 6/7

[root@cloudstack01 network-scripts]# cat ifcfg-eth0:1
VLAN=yes
TYPE=Vlan
DEVICE=eth0:1
PHYSDEV=eth0
VLAN_ID=1
REORDER_HDR=0
BOOTPROTO=none
DNS1=192.168.10.1
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eth0:1
ONBOOT=yes
IPADDR=10.0.0.21
PREFIX=24
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

Ensuring the maximum performance of your CPU in a XenServer environment

I’ll be straightforward in this article. If you want to ensure your application responds as quickly as possible, one of the (many) things you need to verify is whether your CPU is running at the maximum clock speed possible.

Today almost all CPUs have a number of ‘states’, created basically to save energy by lowering the clock (and the current) of the CPU. The CPU raises the clock only when it detects a significant load/demand on it.

If you have a XenServer, you can change the governor algorithm from the default ‘ondemand’ to ‘performance’. Let’s see an example below. On the Dom-0, run the command:

# xenpm get-cpufreq-states
cpu id : 1
total P-states : 16
usable P-states : 16
current frequency : 1200 MHz
P0 : freq [2901 MHz]
P1 : freq [2900 MHz]
P2 : freq [2800 MHz]
P3 : freq [2700 MHz]
P4 : freq [2500 MHz]
P5 : freq [2400 MHz]
P6 : freq [2300 MHz]
P7 : freq [2200 MHz]
P8 : freq [2000 MHz]
P9 : freq [1900 MHz]
P10 : freq [1800 MHz]
P11 : freq [1700 MHz]
P12 : freq [1600 MHz]
P13 : freq [1400 MHz]
P14 : freq [1300 MHz]
*P15 : freq [1200 MHz]

I suppressed a lot of lines to focus on what’s important. As we can see, CPU1 (a core) is running at 1200 MHz, but the highest clock speed possible is 2901 MHz (the P0 state). Let’s see the clock of all cores:

# xenpm get-cpufreq-states | grep current
current frequency : 1200 MHz
current frequency : 1200 MHz
current frequency : 1200 MHz
current frequency : 1300 MHz
current frequency : 1200 MHz
current frequency : 1200 MHz
current frequency : 1200 MHz
current frequency : 2901 MHz

Most of the cores are running at the lowest speed, affecting the performance of all virtual machines and services running on the host. To set all cores to their maximum clock speed at the same time, simply run the following command:

# xenpm set-scaling-governor performance
And to make this setting persist across reboots:

# /opt/xensource/libexec/xen-cmdline --set-xen cpufreq=xen:performance
Let’s check the clock of all cores again.

# xenpm get-cpufreq-states|grep current
current frequency : 2901 MHz
current frequency : 2901 MHz
current frequency : 2901 MHz
current frequency : 2901 MHz
current frequency : 2901 MHz
current frequency : 2901 MHz
current frequency : 2901 MHz
current frequency : 2901 MHz

Now we are running at top speed! You can usually see an immediate benefit on database servers. Be aware that with this change your server will consume more energy, produce more noise, run hotter and demand more from the air conditioning. But if you host your server in a good datacenter, you do not have to worry about it. 🙂
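A quick way to see how the cores are distributed across frequencies is to aggregate the `xenpm` output with awk. A sketch, fed here with a canned sample so the pipeline can be seen on its own (on a real Dom-0, pipe `xenpm get-cpufreq-states` into the awk command instead):

```shell
# Tally cores per current frequency from xenpm-style output
printf '%s\n' \
  'current frequency : 1200 MHz' \
  'current frequency : 1300 MHz' \
  'current frequency : 1200 MHz' \
  'current frequency : 2901 MHz' |
awk -F' : ' '/current frequency/ {c[$2]++} END {for (f in c) print c[f], "core(s) at", f}'
```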

First published on https://www.linkedin.com/pulse/ensuring-maximum-performance-your-cpu-xenserver-n%C3%B3brega

Ten Rules for Web Startups

If you think you have a good idea and want to open a business, please read this text. Keep these few fundamental characteristics in mind when you start to plan and build your product and company.



#1: Be Narrow
Focus on the smallest possible problem you could solve that would potentially be useful. Most companies start out trying to do too many things, which makes life difficult and turns you into a me-too. Focusing on a small niche has so many advantages: With much less work, you can be the best at what you do. Small things, like a microscopic world, almost always turn out to be bigger than you think when you zoom in. You can much more easily position and market yourself when more focused. And when it comes to partnering, or being acquired, there’s less chance for conflict. This is all so logical and, yet, there’s a resistance to focusing. I think it comes from a fear of being trivial. Just remember: If you get to be #1 in your category, but your category is too small, then you can broaden your scope—and you can do so with leverage.

#2: Be Different
Ideas are in the air. There are lots of people thinking about—and probably working on—the same thing you are. And one of them is Google. Deal with it. How? First of all, realize that no sufficiently interesting space will be limited to one player. In a sense, competition actually is good—especially to legitimize new markets. Second, see #1—the specialist will almost always kick the generalist’s ass. Third, consider doing something that’s not so cutting edge. Many highly successful companies—the aforementioned big G being one—have thrived by taking on areas that everyone thought were done and redoing them right. Also? Get a good, non-generic name. Easier said than done, granted. But the most common mistake in naming is trying to be too descriptive, which leads to lots of hard-to-distinguish names. How many blogging companies have “blog” in their name, RSS companies “feed,” or podcasting companies “pod” or “cast”? Rarely are they the ones that stand out.

#3: Be Casual
We’re moving into what I call the era of the “Casual Web” (and casual content creation). This is much bigger than the hobbyist web or the professional web. Why? Because people have lives. And now, people with lives also have broadband. If you want to hit the really big home runs, create services that fit in with—and, indeed, help—people’s everyday lives without requiring lots of commitment or identity change. Flickr enables personal publishing among millions of folks who would never consider themselves personal publishers—they’re just sharing pictures with friends and family, a casual activity. Casual games are huge. Skype enables casual conversations.

#4: Be Picky
Another perennial business rule, and it applies to everything you do: features, employees, investors, partners, press opportunities. Startups are often too eager to accept people or ideas into their world. You can almost always afford to wait if something doesn’t feel just right, and false negatives are usually better than false positives. One of Google’s biggest strengths—and sources of frustration for outsiders—was their willingness to say no to opportunities, easy money, potential employees, and deals.

#5: Be User-Centric
User experience is everything. It always has been, but it’s still undervalued and under-invested in. If you don’t know user-centered design, study it. Hire people who know it. Obsess over it. Live and breathe it. Get your whole company on board. Better to iterate a hundred times to get the right feature right than to add a hundred more. The point of Ajax is that it can make a site more responsive, not that it’s sexy. Tags can make things easier to find and classify, but maybe not in your application. The point of an API is so developers can add value for users, not to impress the geeks. Don’t get sidetracked by technologies or the blog-worthiness of your next feature. Always focus on the user and all will be well.

#6: Be Self-Centered
Great products almost always come from someone scratching their own itch. Create something you want to exist in the world. Be a user of your own product. Hire people who are users of your product. Make it better based on your own desires. (But don’t trick yourself into thinking you are your user, when it comes to usability.) Another aspect of this is to not get seduced into doing deals with big companies at the expense of your users or at the expense of making your product better. When you’re small and they’re big, it’s hard to say no, but see #4.

#7: Be Greedy
It’s always good to have options. One of the best ways to do that is to have income. While it’s true that traffic is now again actually worth something, the give-everything-away-and-make-it-up-on-volume strategy stamps an expiration date on your company’s ass. In other words, design something to charge for into your product and start taking money within 6 months (and do it with PayPal). Done right, charging money can actually accelerate growth, not impede it, because then you have something to fuel marketing costs with. More importantly, having money coming in the door puts you in a much more powerful position when it comes to your next round of funding or acquisition talks. In fact, consider whether you need to have a free version at all. The TypePad approach—taking the high-end position in the market—makes for a great business model in the right market. Less support. Less scalability concerns. Less abuse. And much higher margins.

#8: Be Tiny
It’s standard web startup wisdom by now that with the substantially lower costs to starting something on the web, the difficulty of IPOs, and the willingness of the big guys to shell out for small teams doing innovative stuff, the most likely end game if you’re successful is acquisition. Acquisitions are much easier if they’re small. And small acquisitions are possible if valuations are kept low from the get go. And keeping valuations low is possible because it doesn’t cost much to start something anymore (especially if you keep the scope narrow). Besides the obvious techniques, one way to do this is to use turnkey services to lower your overhead—Administaff, ServerBeach, web apps, maybe even Elance.

#9: Be Agile
You know that old saw about a plane flying from California to Hawaii being off course 99% of the time—but constantly correcting? The same is true of successful startups—except they may start out heading toward Alaska. Many dot-com bubble companies that died could have eventually been successful had they been able to adjust and change their plans instead of running as fast as they could until they burned out, based on their initial assumptions. Pyra was started to build a project-management app, not Blogger. Flickr’s company was building a game. Ebay was going to sell auction software. Initial assumptions are almost always wrong. That’s why the waterfall approach to building software is obsolete in favor of agile techniques. The same philosophy should be applied to building a company.

#10: Be Balanced
What is a startup without bleary-eyed, junk-food-fueled, balls-to-the-wall days and sleepless, caffeine-fueled, relationship-stressing nights? Answer?: A lot more enjoyable place to work. Yes, high levels of commitment are crucial. And yes, crunch times come and sometimes require an inordinate, painful, apologies-to-the-SO amount of work. But it can’t be all the time. Nature requires balance for health—as do the bodies and minds who work for you and, without which, your company will be worthless. There is no better way to maintain balance and lower your stress that I’ve found than David Allen’s GTD process. Learn it. Live it. Make it a part of your company, and you’ll have a secret weapon.

#11 (bonus!): Be Wary
Overgeneralized lists of business “rules” are not to be taken too literally. There are exceptions to everything.


Source: http://evhead.com/2005/11/ten-rules-for-web-startups.asp

CentOS 7, systemd and virtual interfaces

Anyone who started using CentOS 7 has certainly noticed how different things are. I have been installing and configuring servers with CentOS since version 4, back in March 2005, and I still use it in production today. But jumping from version 6.6 to 7.0, I was really surprised by how much things have evolved (even though some people don’t see it that way).

Something you commonly do on a server is create virtual interfaces. On CentOS 6.6 (and earlier), with kernel 2.6.32, network interfaces are named eth0, eth1, eth2, and so on. In version 7 of the operating system, with kernel 3.10.0, they get names like enp3s0. Another difference in this new version is that the usual scripts in /etc/init.d no longer exist; they were replaced by systemd. So calling /etc/init.d/network restart or service network restart will no longer work.

But let’s go ahead and create a virtual interface. The interface configuration files are still in the same directory (phew!), /etc/sysconfig/network-scripts. I’ll list two files here, ifcfg-enp3s0 and ifcfg-enp3s0.1. The first file is the network configuration of the physical interface enp3s0, and the second is a virtual network interface on top of enp3s0.

# cat ifcfg-enp3s0
TYPE=Ethernet
BOOTPROTO=none
DNS1=192.168.10.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=enp3s0
UUID=703bf06c-50e7-419e-b595-9ade66c40e13
ONBOOT=yes
HWADDR=F0:4D:A2:DF:8C:E9
IPADDR=192.168.10.20
PREFIX=24
GATEWAY=192.168.10.1
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
# cat ifcfg-enp3s0.1 
VLAN=yes
TYPE=Vlan
DEVICE=enp3s0.1
PHYSDEV=enp3s0
VLAN_ID=1
REORDER_HDR=0
BOOTPROTO=none
DNS1=192.168.10.1
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=enp3s0.1
ONBOOT=yes
IPADDR=10.0.0.11
PREFIX=24
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes

There you go: some things are different, but what each parameter means is quite clear.

Now, to reconfigure the network and apply the settings, run the following command:

# systemctl restart network.service

And with ifconfig, you can see that the interfaces are up.

# ifconfig 
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 192.168.10.20 netmask 255.255.255.0 broadcast 192.168.10.255
 inet6 fe80::f24d:a2ff:fedf:8ce9 prefixlen 64 scopeid 0x20<link>
 ether f0:4d:a2:df:8c:e9 txqueuelen 1000 (Ethernet)
 RX packets 1908 bytes 207132 (202.2 KiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 593 bytes 115909 (113.1 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 device interrupt 18
enp3s0.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 10.0.0.11 netmask 255.255.255.0 broadcast 10.0.0.255
 inet6 fe80::f24d:a2ff:fedf:8ce9 prefixlen 64 scopeid 0x20<link>
 ether f0:4d:a2:df:8c:e9 txqueuelen 0 (Ethernet)
 RX packets 0 bytes 0 (0.0 B)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 11 bytes 1752 (1.7 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
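As an aside, the VLAN sub-interface name is simply PHYSDEV plus VLAN_ID joined with a dot. A small sketch that derives it from ifcfg-style keys (the sample is inlined here; on a real host you would read /etc/sysconfig/network-scripts/ifcfg-enp3s0.1 instead):

```shell
# Derive <physdev>.<vlan_id> from ifcfg-style key=value lines
cfg='PHYSDEV=enp3s0
VLAN_ID=1'
physdev=$(printf '%s\n' "$cfg" | sed -n 's/^PHYSDEV=//p')
vlan_id=$(printf '%s\n' "$cfg" | sed -n 's/^VLAN_ID=//p')
echo "${physdev}.${vlan_id}"
# prints enp3s0.1
```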

Performance tip for virtual machines with many vCPUs and heavy IO demands

A common problem you may run into when using virtual machines, whether in cloud computing such as Amazon or in a private virtualized environment such as XenServer or VMware, is an IO (Input/Output) bottleneck.

This happens basically because all IO (that is, disk read/write operations, memory, network, etc.) is processed exclusively by vCPU0, the first vCPU of the virtual machine, since the virtualization layer does not let you do IRQ balancing across the other vCPUs.

An example: a VM with 16 vCPUs and 64 GB of RAM, running a database under heavy load, can quickly bottleneck and suffer a sharp drop in performance due to vCPU0 being overloaded. Using the top command, you will see vCPU0 at 0% idle (with most of the processing in system time).

One tip to work around this problem is to isolate vCPU0 from any application running in the environment. In the previous example, if the running database were PostgreSQL, you would just create a script like the one below (here called cpu_affinity.sh)

#!/bin/bash
# Pin every postgres process to vCPUs 1-15, keeping vCPU0 free for IO handling
for i in $(pgrep postgres); do echo $i; taskset -pc 1-15 $i; done

and schedule it to run every minute in cron.
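The cron entry could look like this (a sketch; the path /usr/local/bin/cpu_affinity.sh is an assumption, so adjust it to wherever you saved the script):

```shell
# /etc/cron.d/cpu_affinity: run the pinning script every minute as root
# (script path is hypothetical)
* * * * * root /usr/local/bin/cpu_affinity.sh >/dev/null 2>&1
```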

This script simply takes the PID of every postgres process and tells the kernel that each one may run on any of the vCPUs from 1 to 15 (excluding vCPU0), thereby keeping vCPU0 isolated for IO processing only.

This reduces the load and eliminates IO bottlenecks at moments of high load.

How I reduced data center costs by 60%, increasing capacity by 100%

The quick answer: I moved my website from a big data center in the USA to iWeb, and virtualized every server with XenServer. Here’s how my team and the iWeb sysadmins did it.

I work at eSapiens Internet, a small IT company that runs a social network in Brazil. Today we have around 500,000 unique visitors per day and 250,000,000 page views per month. Keeping all our web servers and database servers running well together is not an easy task.

Before October 2013, we had more than 30 dedicated servers connected by a 1 Gbps network and a few shared iSCSI storage volumes. All these servers ran CentOS with nginx on top. Backend web servers ran php-fpm, and backend database servers ran PostgreSQL.

All services were very well configured and we were getting the best performance a dedicated server could provide. But we still thought this might not be the best solution available, so we looked for a new one, which I’ll describe now.

A new solution
After some research we decided to virtualize all our dedicated servers using Citrix XenServer. We made the choice by doing some tests between vmWare and XenServer, finding the latter to have the best cost/benefit ratio. On the storage side, we chose DSSv7 from Open-E. It’s a great storage software solution, Linux based and fully compatible with XenServer. It provided us with replicated high availability iSCSI storage volumes and, even better than that, an active-active cluster solution.

This image shows our solution today. As you can see, the solution is quite simple.

eSapiens_blog_post_01
Basically we have six application servers (AppServers) running Citrix XenServer 6.2, two storage servers running Open-E DSS v7, a couple of firewalls and a VPN server. Connecting all this hardware are two 10-gigabit switches, configured with the LACP protocol.

This solution design was a joint effort between teams from eSapiens and iWeb, and the support offered by iWeb was crucial to choosing them over another data center. We discussed the design with iWeb for several days before the project, as well as during the deployment of servers, and without the support of the iWeb team we would not have had the same success. Today we have high availability at so many levels (storage server, xenserver, firewalls, switches) that any hardware failure will not stop us.

Tips for success
Here are some tips we have to share about the configuration of some of our virtual servers:

Since we have a lot of simultaneous access, we built a PostgreSQL cluster with seven VMs, using one master node and six slave nodes. To replicate data we used Streaming Replication, which is native to PostgreSQL and works very well, and to balance the requests we use pgPool. Our database VMs are the largest that XenServer supports; each one has 128 GB of RAM and 16 vCPUs.

The biggest challenge we had was IO. Because of the XenServer architecture, only vCPU0 processes all IO at the VM level. To get the maximum performance, we didn’t attach iSCSI storage volumes directly to the VM from the storage server, but rather connected the iSCSI storage volume from the storage to XenServer and then to the VM. You might think this would jeopardize performance because we add another element (XenServer) between the storage server and the VM, but this element has eight CPUs that balance IO through IRQs, and that’s what matters. Another tip is to use the taskset program (in Linux) to set a group of processes, such as the PostgreSQL processes, to use all other vCPUs except vCPU0, reserving that vCPU for the IO subsystem only.

The results
In terms of results, this project reduced data center costs by 60% (even taking all the XenServer and DSS V7 licenses into account), and we estimate that our capacity to serve web pages has increased to at least 2x (maybe 3x) that of the older dedicated structure. This result was only made possible by the great prices offered by iWeb for equipment and the great crew that helped us throughout the project.

 
