Friday, December 5, 2014

Ansible: Generating an SSH pub key file and uploading it to another host to sync files from there

Update: check the comments for new workflows

Original from 19/May/2014... updated!

I found this workflow for our systems:

  1. Bring up the new box.
  2. Generate SSH keys on that new box.
  3. "Fetch" the pub key from the new server to the Ansible server.
  4. Copy that key to the authorized_keys file of the other server (from the Ansible server).
  5. Execute an rsync from the new server to the other server without any key/password prompt.
My trick is:
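A minimal sketch of the idea (host names, paths and the rsync direction are assumptions; 2014-era key=value module syntax):

- hosts: newbox
  tasks:
    - name: generate an SSH keypair on the new box (skipped if it already exists)
      command: ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa creates=/root/.ssh/id_rsa

    - name: fetch the pub key back to the Ansible server
      fetch: src=/root/.ssh/id_rsa.pub dest=/tmp/newbox_id_rsa.pub flat=yes

- hosts: otherbox
  tasks:
    - name: authorize the fetched key on the other server
      authorized_key: user=root key="{{ lookup('file', '/tmp/newbox_id_rsa.pub') }}"

- hosts: newbox
  tasks:
    # path and direction are examples; adapt them to your sync
    - name: rsync from the new server to the other one, no password prompt now
      command: rsync -a -e ssh /srv/data/ root@otherbox:/srv/data/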

Tuesday, November 11, 2014

memcached-zabbix-template monitor

This is a minimal template to get info from your Memcached server in two possible ways: via zabbix-agentd on the clients, or via externalscripts on the Zabbix server. Choose your option.

Monitored information, for now:
  • 'bytes'
  • 'cmd_get'
  • 'cmd_set'
  • 'curr_items'
  • 'curr_connections'
  • 'limit_maxbytes'
  • 'uptime'
  • 'get_hits'
  • 'get_misses'

And the special HIT-ratio in %:
  • 'ratio' 
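That ratio is the classic hit-rate formula: get_hits / (get_hits + get_misses) * 100. A quick shell sketch to eyeball it, assuming Memcached listens on localhost:11211:

STATS=$(echo stats | nc -w1 localhost 11211 | tr -d '\r')
HITS=$(echo "$STATS" | awk '/^STAT get_hits/ {print $3}')
MISSES=$(echo "$STATS" | awk '/^STAT get_misses/ {print $3}')
echo "ratio: $(( 100 * HITS / (HITS + MISSES) ))%"   # integer %; fails if both counters are 0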

Installation in the Zabbix Server

You should look for the external scripts directory in your Zabbix Server configuration file. In the CentOS 6.5 RPM Zabbix installation it is: /usr/lib/zabbix/externalscripts

Copy the Python script there. A chmod/chown is necessary to get execution permission.
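For example (the script filename here is hypothetical; use the one from the repo):

cp memcached-stats.py /usr/lib/zabbix/externalscripts/    # hypothetical filename
chown zabbix:zabbix /usr/lib/zabbix/externalscripts/memcached-stats.py
chmod 755 /usr/lib/zabbix/externalscripts/memcached-stats.py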

Now, in your Zabbix frontend: Configuration → Templates section, Import button on the right side.

Choose the XML file and import.

Apply this new template to your Memcached servers.

You don't need to modify the template if you are using the standard Memcached port (11211).

This allows fast configuration: you can apply the same template to all your Memcached servers without any modification or installation on the agents.

Of course, it can work on the agent/client side too.




You can find it in my repo: https://github.com/vicendominguez/memcached-zabbix-template

:)

Wednesday, October 29, 2014

Getting current TCP connection count on a Linux Server with tshark

Do you have a lot of connections because of a DoS attack? Or perhaps your MySQL server gets a lot of connection storms? Do you need to know the exact number of those TCP connections?

Ok... there we go!

Install the terminal flavour of Wireshark (tshark) on your Linux box and then run:

tshark -f 'tcp port 80 and tcp[tcpflags] & (tcp-syn) !=0 and tcp[tcpflags] & (tcp-ack) = 0' -n -q -z io,stat,1 -i eth0 -a "duration:10"

  • "port 80" could be "port 3306" or "port whatever-you-want"
  • "eth0" and "duration:10" can be changed too.

Description:
tshark captures traffic for 10 seconds. After that, it writes a report with your new-connection count for each one-second interval (the Frames column); the capture filter matches SYN packets without ACK, i.e. connection attempts.

=============================
| IO Statistics             |
|                           |
| Interval size: 1 secs     |
| Col 1: Frames and bytes   |
|---------------------------|
|          |1               |
| Interval | Frames | Bytes |
|---------------------------|
|  0 <>  1 |     10 |   740 |
|  1 <>  2 |    105 |  7770 |
|  2 <>  3 |      1 |    74 |
|  3 <>  4 |      0 |     0 |
|  4 <>  5 |      3 |   222 |
|  5 <>  6 |     85 |  6290 |
|  6 <>  7 |     16 |  1184 |
|  7 <>  8 |     31 |  2294 |
|  8 <>  9 |     72 |  5328 |
|  9 <> 10 |      3 |   222 |
=============================
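By the way, if you only need a point-in-time count of established connections instead of the rate of new ones, iproute2's ss can do it without capturing (port 80 again as an example; the first line of output is a header, so subtract one):

ss -tan state established '( sport = :80 )' | wc -l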


That's all.


Thursday, October 16, 2014

Fast stats from glassfish server.log

A very fast Python script to take a quick look at your GlassFish server.log file.

Output:

Start date:  [2014-10-13T23:54:54.372+0200]
End date:  [2014-10-16T13:46:22.230+0200]
Total INFO:  826
Total WARN:  126
Total SEVERE:  2341
Total ERROR:  96
Total Processing:  3389
Total Exceptions:  0
Total logfile lines:  13646

The script:
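A rough shell equivalent of what it counts, assuming GlassFish's bracketed [LEVEL] log format (level names may differ per version):

LOG=server.log   # adjust the path
head -1 "$LOG" | awk '{print "Start date: " $1}'
tail -1 "$LOG" | awk '{print "End date: " $1}'
for level in INFO WARNING SEVERE; do
    echo "Total $level: $(grep -c "\[$level\]" "$LOG")"
done
echo "Total Exceptions: $(grep -c 'Exception' "$LOG")"
echo "Total logfile lines: $(wc -l < "$LOG")"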


Friday, October 3, 2014

Avoiding the "NGINX buffers the request of the body when uploading large files" issue

The problem with NGINX is perfectly described by David Moreau Simard in his blog post: A use case of Tengine, a drop-in replacement and fork of nginx. The summary is in this paragraph:
I noticed a problem when using nginx as a load balancer in front of servers that are the target of large and numerous uploads. nginx buffers the request of the body and this is something that drives a lot of discussion in the nginx mailing lists.
This effectively means that the file is uploaded twice. You upload a file to nginx that acts as a reverse proxy/load balancer and nginx waits until the file is finished uploading before sending the file to one of the available backends. The buffer will happen either in memory or to an actual file, depending on configuration.
Tengine was recently brought up in the Ceph mailing lists as part of the solution to tackling the problem so I decided to give it a try and see what kind of impact it’s unbuffered requests had on performance.
There are similar issues in a lot of lists:


I have made a fast "adaptation" of Yaoweibin's no_buffer patch for the new nginx releases.

Weibin Yao (yaoweibin) is a MOTU working on the Tengine project: https://github.com/yaoweibin

Tengine (https://github.com/alibaba/tengine) is a web server originated by Taobao, the largest e-commerce website in Asia. It is based on the Nginx HTTP server and has many advanced features. Tengine has proven to be very stable and efficient on some of the top 100 websites in the world, including taobao.com and tmall.com.

At the moment, it is not possible to avoid the buffering of POST requests in NGINX. If your work involves uploading large files to a backend, you know what I mean.

Tengine has a patch (by yaoweibin?) to solve this, and it appears as a feature on its webpage: http://tengine.taobao.org/
  • Sends unbuffered upload directly to HTTP and FastCGI backend servers, which saves disk I/Os.
There is a pending ticket asking the Nginx team for this: http://trac.nginx.org/nginx/ticket/251 but there is no ETA: http://forum.nginx.org/read.php?2,253626,253705#msg-253705

Finally, I chose to adapt Yaoweibin's patches (http://yaoweibin.cn/patches/) to nginx 1.7.6.

For me, it is working perfectly.

A CentOS RPM package is available in our repo: http://repo.enetres.net/repoview/nginx.html

The new options in the conf file are:
  • client_body_buffers
  • client_body_postpone_size
  • proxy_request_buffering
  • fastcgi_request_buffering
The description of these new options is on this Tengine page: http://tengine.taobao.org/document/http_core.html
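A minimal sketch of their use (the directive names come from the list above; the location and sizes are just examples):

location /upload {
    client_body_buffers        16 16k;   # sizes are examples
    client_body_postpone_size  4k;
    proxy_request_buffering    off;      # stream the body to the backend as it arrives
    proxy_pass http://backend;
}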

UPDATE
This patch is not necessary from nginx 1.7.11 onwards.

Monday, September 29, 2014

MyDumper 0.6.2 RPM for CentOS 6.5

Hello... my new mydumper RPM package for CentOS 6.5 is here.

From the original Changelog:
Bugs Fixed:
  • 1347392 last row of table not dumped if it brings statement over statement_size
  • 1157113 Compilation of latest branch fails on CentOS 6.3 64bit 
  • 1326368 Can't make against Percona-Server-devel-55 headers 
  • 1282862 unknown type name 'HASH'
  • 1336860 k is used twice
  • 913307 Can't compile - missing libs crypto and ssl
  • 1364393 rows chunks doesn't increase non innodb jobs
  • TokuDB support
  • Support to dump tables from different schemas
New Features:
  • --lock-all-tables (instead of FTWRL)
Available in our repo:
PS: Not tested yet, sorry. Please report any issues in the comments.
Update: new version of mydumper available.

Wednesday, September 24, 2014

Process monitor template with logger and email notifier (daemonized)

If you are looking for a template for a quick copy-and-paste script (daemonized) to monitor one process... you can get some ideas from this:
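A minimal sketch of the idea; the process name, mail address and interval are assumptions:

#!/bin/bash
# watchdog: check a process periodically, log and mail when it is down
PROC="mysqld"               # process to watch (example)
MAILTO="root@localhost"     # where to send the alert (example)
INTERVAL=60                 # seconds between checks

(                           # background subshell = poor man's daemon
while true; do
    if ! pgrep -x "$PROC" > /dev/null; then
        logger -t procmon "$PROC is not running"
        echo "$PROC is down on $(hostname) at $(date)" | mail -s "[procmon] $PROC down" "$MAILTO"
    fi
    sleep "$INTERVAL"
done
) &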



Saturday, September 6, 2014

Killing old Linksys WAG54* or WAG200* like "Rambo" (temporarily)

Two days ago I visited a friend of mine. I was showing him how to discover open ports in a network with nmap. As an example I chose his home gateway: a WAG54G from Linksys, with its firmware not updated since 2009, more or less.

He was surprised when we discovered two weird open ports on the gateway: 5190 and 5566. He didn't have any port configured to be open, but I ignored that because I thought they were private ports used to configure the gateway from some Windows GUI or something similar.

We kept talking and connecting to some ports for a while... and I figured out that the behavior of port 5566 was different after connecting to port 5190. It was very odd.

If you connect to port 5566 you get disconnected quickly, but if you connect to port 5190 first, port 5566 waits for data in a lot of cases. Bye bye fast disconnection.

So, out of curiosity, I tried to fuzz the port with /dev/urandom... something similar to:
cat /dev/urandom | nc -v 192.168.1.1 5566
Meanwhile... while I was explaining to my friend how a protocol works, our internet connection went down.

BOOM!        o_O

I said: - Ermmm... let me try it again, buddy! ...and...

BOOM!       O_O

I had to go back home. On the way, I kept thinking about the port issue.

At home, my first step was to see if those ports were private or public.... and BOOM! Public!

But I didn't have any Linksys router at home to try it again... so my friend Shodan came to the rescue! ;)

The next steps were very boring: a huge trial-and-error process. More or less, this was the PoC conclusion:

Ingredients:
  • A very old Linksys: Shodan has a lot of them.
  • Ports 5190 and 5566 open: Shodan has fewer of these, but it has them.
  • Nmap to confirm some data.
There we go:
$ nmap -sS -p 5190,5566 x.x.x.x

Starting Nmap 5.51 ( http://nmap.org ) at 2014-09-06 08:33 CEST
Nmap scan report for gateway (x.x.x.x)
Host is up (0.080s latency).
PORT     STATE SERVICE
5190/tcp open  aol
5566/tcp open  westec-connect

Curl check:
$ curl -I -q --max-time 5 x.x.x.x
HTTP/1.1 401 Unauthorized
Server:
Date: Sat, 06 Sep 2014 06:31:53 GMT
WWW-Authenticate: Basic realm="Linksys WAG200G "
Content-Type: text/html
Connection: close

OK. Ingredients OK. We can run the PoC. Sometimes you need to run it twice or more times; honestly, I don't know why.

$ python OldLinksysMustDie.py x.x.x.x
OldLinksysMustDie v0.001b PoC
 * Connecting to x.x.x.x...
 * Cooking... be patient....
 * On Fire!
><
><
><
[BYEBYE] Ooops! connection to the target lost. [BYEBYE]

Curl again:

curl -I -q --max-time 5 x.x.x.x
curl: (28) Operation timed out after 5001 milliseconds with 0 bytes received

And RAMBO was here! I tried other ones and... all temporarily dead!!

NOTE: The router recovers 5-10 minutes after running the PoC, but if you want a big DoS, a "loop" in the computing world is so easy... ;)

Finally, my horrible PoC (horrible like my English) in python:

Are you ready John?

Firmware affected: 
  • 1.01.04


Thursday, September 4, 2014

collectd-web for CentOS 6.5

There are RPM packages for collectd on CentOS 6.x; you will need the EPEL repository.

If you want an easy web interface for the graphs, there is a collectd-web package there, but that package is actually the collection3 frontend. So, if you prefer the original collectd-web, you will need to install it from its GitHub site. The most important step is to create the /etc/collectd/collection.conf file. Be careful: if you want to change the path, you will need to modify the sources.

The collection file content for CentOS is something like:
datadir: "/var/lib/collectd/"
And a minimal configuration for Apache would be something like:
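A minimal sketch, assuming collectd-web was cloned into /var/www/collectd-web (Apache 2.2 syntax, as shipped with CentOS 6):

# /var/www/collectd-web is an assumed install path
Alias /collectd-web /var/www/collectd-web
<Directory /var/www/collectd-web>
    Options Indexes FollowSymLinks ExecCGI
    AddHandler cgi-script .cgi
    DirectoryIndex index.html
    Order allow,deny
    Allow from all
</Directory>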

Wednesday, August 27, 2014

Using nc and ncat with tor without torify/torsocks

If torify/torsocks gives you a little headache inside a script with nc and ncat, both tools have some interesting built-in parameters, and they are very easy to use.

For example, when you have a Tor SOCKS proxy on localhost, with netcat (nc) it would be similar to:
nc -v -X5 -x localhost:9050 <server> <port>

and for ncat:

ncat -v --proxy localhost:9050 --proxy-type socks5 <server> <port>

Of course, you can use any proxy. It is very useful for scripting ;)

Tuesday, August 5, 2014

iperf as a daemon to test the bandwidth (in RHEL6)

Update: check Jason's comments to discover new updates for iperf v3.

Do you need to check the connectivity between your server and your customers? Do you have different customers with different OSes? Take a look at iperf.

From https://iperf.fr/:

Iperf features

  • TCP
    • Measure bandwidth
    • Report MSS/MTU size and observed read sizes.
    • Support for TCP window size via socket buffers.
    • Multi-threaded if pthreads or Win32 threads are available. Client and server can have multiple simultaneous connections.
  • UDP
    • Client can create UDP streams of specified bandwidth.
    • Measure packet loss
    • Measure delay jitter
    • Multicast capable
    • Multi-threaded if pthreads are available. Client and server can have multiple simultaneous connections. (This doesn't work in Windows.)
  • Where appropriate, options can be specified with K (kilo-) and M (mega-) suffixes. So 128K instead of 131072 bytes.
  • Can run for specified time, rather than a set amount of data to transfer.
  • Picks the best units for the size of data being reported.
  • Server handles multiple connections, rather than quitting after a single test.
  • Print periodic, intermediate bandwidth, jitter, and loss reports at specified intervals.
  • Run the server as a daemon.
  • Run the server as a Windows NT Service
  • Use representative streams to test out how link layer compression affects your achievable bandwidth.

There are pre-compiled binaries for a lot of platforms.

Perhaps you want to leave a daemon running on your server. Great!
RHEL6 installation:

  • EPEL repository installed.
  • yum install iperf
  • Open a port in your firewall.
  • Save an init script in /etc/init.d (a sketch follows this list):
  • chmod 755 /etc/init.d/iperfd
  • chkconfig --add iperfd
  • chkconfig iperfd on
  • And don't forget to change the port variable in the script ;)
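A minimal sketch of such an init script (iperf's default port 5001 assumed; change it to the one you opened in the firewall):

#!/bin/bash
# iperfd        Run iperf in server mode as a daemon
# chkconfig: 2345 90 10
# description: iperf bandwidth-test daemon

PORT=5001   # change me

case "$1" in
  start)
    echo -n "Starting iperfd: "
    iperf -s -p $PORT -D >/dev/null 2>&1 && echo OK || echo FAILED
    ;;
  stop)
    echo -n "Stopping iperfd: "
    pkill -f "iperf -s -p $PORT" && echo OK || echo FAILED
    ;;
  status)
    pgrep -f "iperf -s -p $PORT" >/dev/null && echo "iperfd is running" || echo "iperfd is stopped"
    ;;
  restart)
    $0 stop; $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|status|restart}"
    exit 1
    ;;
esac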



Thursday, July 31, 2014

Nodejs: your global modules don't work after installing

Another Linux distribution enigma.

In the name of Tutatis, why the f*** are the node_modules directories different between npm and the "distro" packages???? The error message is similar to:
Error: Cannot find module 'any_npm_module'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at repl:1:7
at REPLServer.self.eval (repl.js:110:21)
at repl.js:249:20
at REPLServer.self.eval (repl.js:122:7)
at Interface.<anonymous> (repl.js:239:12)
at Interface.emit (events.js:95:17)
Surprise!!!

CentOS, Fedora, Ubuntu, Debian and the MacOSX ports use different paths for the Node.js global modules. If you install the "node" and "npm" binaries, and later you use npm to install some modules, your modules will seem to have disappeared.

Workarounds:
  1. Compile your own node/npm version (LOL!).
  2. Always use local modules (for command-line scripts this is very LOL!)
  3. Use NODE_PATH to point to the correct path:
    1. Standard profile: for a single user, you can use the .profile, .bash_profile or .bashrc files in $HOME (be careful: a login shell reads only one of them, so choose the correct one).
    2. Eclipse developer: you can configure environment variables in the properties of your project's main js file, so you could put NODE_PATH there.
    3. Daemon/service profile: launched from init.d. You could create a proper init.d script with NODE_PATH inside ;)
    4. Master of node global modules (one ring to rule them all):
      1. Linux: create /etc/profile.d/nodejs.sh with the NODE_PATH export inside, and restart (a sketch follows this list).
      2. MacOSX: create the crazy and non-existent-by-default /etc/launchd.conf and write this line: setenv NODE_PATH /usr/local/lib/node_modules. And restart.
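A minimal sketch for option 4.1 (the path is npm's usual global prefix; check yours with "npm root -g"):

# /etc/profile.d/nodejs.sh -- path is an assumption, verify with: npm root -g
export NODE_PATH=/usr/local/lib/node_modules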
:) 



Wednesday, July 30, 2014

Glassfish 4: Updating Weld

Summary:

Updating Weld (http://delabassee.com/blog/?p=286):
1. Grab the Weld 2.0.5 OSGi bundle and copy it to the GF modules directory:
cp weld-osgi-bundle-2.0.5.Final.jar glassfish/modules/
2. Restart GlassFish:
glassfish/bin/asadmin restart-domain domain1
3. Check which version is now installed by issuing:
glassfish/bin/asadmin osgi lb | grep 'Weld OSGi'

Thursday, July 24, 2014

Ubuntu/Debian: the nodejs package provides a nodejs binary, not a node binary

Crazy and f*%*%* Linux distribution world.

Error:
/usr/sbin/node: No such file or directory
Someone in Debian (and therefore Ubuntu) chose to remove the "node" binary from the nodejs package. Now the binary is "nodejs". So, for Eclipse and some scripts, node does not work even though the nodejs package is installed!

Well... the same guy preferred to create a second package to solve that. The package is "nodejs-legacy".

So, please, poor Debian/Ubuntu user, remember this when you install node:
apt-get install nodejs nodejs-legacy
That's life.




Wednesday, July 23, 2014

Nodejs - npm - Error: SELF_SIGNED_CERT_IN_CHAIN

New error in the npm port package in Mavericks (OSX 10.9.4):
npm ERR! Error: SELF_SIGNED_CERT_IN_CHAIN
Workaround:
npm config set ca ""
And re-install the module.
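npm stores that setting in your ~/.npmrc. Whenever you want the default CA checking back:
npm config delete ca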

;)

Wednesday, July 16, 2014

Half Dome

Tim Cook:


Me, some days before:


:)

Thursday, July 3, 2014

Changing the ONBOOT value in RHEL/CentOS with NetworkManager enabled

Sometimes the RHEL world has odd things. For example:

If you want to configure a network interface using NetworkManager (without X!!!), you can do it with the system-config-network script. From there, you can change the network configuration and the DNS/hostname of the box.

What is the problem then?

Network interfaces can be enabled or disabled at boot time. You can see that in the configuration file (/etc/sysconfig/network-scripts/ifcfg-<interfacename>). It is something like:
ONBOOT=yes  or  ONBOOT=no
But we are talking about using NetworkManager (NM_CONTROLLED=yes in that file), so you cannot edit and change that value directly.

What is the way then?

You go to the famous system-config-network script and you will discover there is no option to change the ONBOOT value.

WTF!

Well, the way to change this value is very crazy.

You will need to export the NetworkManager configuration from the terminal:
system-config-network-cmd > export.cfg
Modify the correct line of your network interface. For example:
DeviceList.Ethernet.eth0.OnBoot=False 
to:
DeviceList.Ethernet.eth0.OnBoot=True
Save the file and import it with:
system-config-network-cmd -i < export.cfg
TADAAA!!!

Crazy, I know.


Tuesday, July 1, 2014

Show RAID recovery process from Dell PERC H710 disks on CentOS

You will need to install MegaCLI (read this: http://vicendominguez.blogspot.com.es/2014/02/dell-h710p-equivalent-to-lsi-megaraid.html)

Two commands from here:

  • Show and exit:

/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -ShowProg -PhysDrv \[32:0\] -a0

  • Continuous:

/opt/MegaRAID/MegaCli/MegaCli64 -PDRbld -ProgDsply -PhysDrv \[32:0\] -a0

The value [32:0] stands for [Enclosure:Slot].

To get the enclosure number there are two possible commands (choose one):
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL
/opt/MegaRAID/MegaCli/MegaCli64 -EncInfo -aALL
Some more info: http://www.maths.cam.ac.uk/computing/docs/public/megacli_raid_lsi.html

:)

Wednesday, June 4, 2014

MTR 0.85 with AS lookup for CentOS/RHEL

From http://www.bitwizard.nl/mtr/:
mtr combines the functionality of the 'traceroute' and 'ping' programs in a single network diagnostic tool.
And MTR 0.85 has a very interesting new feature. Its git repo says:
Add -z / --show-ip support
This new option displays both the hostname and the IP address of a
host in the path. This is particularly important when the hostname
determined by a reverse lookup has no corresponding forward record.
This is similar to the -b (both) option in tracepath, but -b was
already taken in mtr for --bitpattern.
Using this option also implies -w / --report-wide as this option isn't
terribly useful without it in report mode.
In general we endeavor to only show the IP address once per line, but
in the split view we consistently add a separate field for IP address
to the output.
Signed-off-by: Travis Cross <tc@traviscross.com>
So I have created a new RPM package for the 0.85 version. It is terminal-only. No X, sorry.

It looks great with the -z option:

  1. AS6461  50.7.157.57           0.0%    10    0.9   0.8   0.7   1.2   0.0
  2. AS6461  ae1.mpr1.fra4.de.abo  0.0%    10    0.2   0.2   0.1   0.4   0.0
  3. AS6461  zayo-google.fra4.de.  0.0%    10    9.1   9.1   9.0   9.2   0.0
  4. AS15169 209.85.240.64         0.0%    10   14.6  14.9  14.6  16.4   0.5
  5. AS15169 72.14.234.233         0.0%    10   16.5  14.7  14.3  16.5   0.6
  6. AS15169 72.14.235.15          0.0%    10   16.1  16.1  16.1  16.3   0.0
  7. AS15169 209.85.240.188        0.0%    10   39.8  46.2  39.7  84.8  14.9
  8. AS15169 209.85.240.97         0.0%    10   40.0  40.0  40.0  40.0   0.0
  9. AS15169 mad01s15-in-f0.1e100  0.0%    10   39.7  39.7  39.3  40.0   0.0

You can download from here:  http://repo.enetres.net/repoview/mtr.html



Thursday, May 29, 2014

It is not possible to search by OS in ansible-galaxy (yet?)

My tweets to the Ansible creator:

So... if you find an interesting role, maybe you cannot use it. You have to verify, role by role, whether it is correct for your OS or not... there is no other way. :(

Tuesday, May 13, 2014

One Makefile to rule them all

In a project, if you have several Makefiles in different directories and you need to compile all of them, you can try a "master" Makefile like this:

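A minimal sketch; the subdirectory names are assumptions, and remember that recipe lines must start with a TAB:

# subdirectories are examples; list your real ones here
SUBDIRS = libfoo app tools

all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

clean:
	for dir in $(SUBDIRS); do $(MAKE) -C $$dir clean; done

.PHONY: all clean $(SUBDIRS)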



:)

Wednesday, April 23, 2014

No difference: Apple Support is a black hole.

Oh my... guys, I am losing my battle with Apple. It makes no sense to me, but it is happening. The past couple of days... have been pure shit. Grrr!

If you think your super expensive laptop made in the USA is better than a super cheap model made in China/Korea/Spain, then think again! Apple, as a company, works hard to maintain its image as being perfect (including the support they provide)... but as I recently learned, they are no different from any other company. It is only an image in your brain; it is marketing.

In the 20 years that I have been using laptops, I have NEVER EEEEEEEVER had as many problems as I am having with my Apple laptop. I repeat: NEVER has a battery exploded. Yes, you heard me! Exploded like a BOMB! I have had a number of laptops - including laptops from HP, IBM, Lenovo, and Acer - and have never had this happen to me. The batteries in those laptops generally last for five to six years. While that isn't great, it's standard for the market.

This time, after just four years, the battery in my macbook exploded. Oh man! This is not normal. You could tell me the battery would end its life without charge… but my macbook went from holding a charge for 3 hours (about 500 cycles) to exploding. Apple is telling me it is normal. WTF! That is not normal. It must be investigated. Either Apple didn't listen to me or they didn't want to listen to me. I expected the battery in my macbook to last for at least another 2-3 years. After all, that is the norm… and it is what I expected having paid such a hefty price for it. Grrrr!

So, now I ask myself: why did I pay double the price for a macbook, if it will only last half the time? It makes no sense. All I can say is… lesson learned!

My incident ID: 597671455. Apple's response: "Normal end. We don't have a procedure to solve it".

Here are pictures of my laptop – post explosion:




In this thread (Mavericks - power use / service battery) there are, right now, 65658 views and 342 replies. Nothing more to say.

:(

(thanks for the translation Ken)

Update1: 10.9.3 does not solve the issue (at the moment)
Update2: I lost my battle. I have bought this battery: http://www.amazon.es/gp/product/B008XXK5XS/ref=oh_details_o00_s00_i00?ie=UTF8&psc=1
Update3: two weeks with this new battery so far; I think the temperature is higher than with the original battery while charging, but it is OK. About 4:30h of battery autonomy.




Tuesday, April 22, 2014

Compiling Boost 1.55 in CentOS 6

Fast:
  1. EPEL repository
  2. Updated system
  3. yum groupinstall "Development tools"
  4. yum install boost*
  5. yum remove boost* (last two steps to get initial dependencies)
  6. yum install libicu libicu-devel zlib-devel zlib python-devel bzip2-devel texinfo
  7. Download tarball of boost_1_55 and extract
  8. ./bootstrap.sh --with-icu
  9. ./b2 -d0 -a
:)

Thursday, April 10, 2014

MyDumper 0.6.1 RPM for CentOS 6.5


I made the 0.6.0 mydumper RPM package a few days ago, but there is a new release.

From MySQL Performance Blog:

mydumper 0.6.0

  • #1250269 ensure statement is not bigger than statement_size
  • #827328 myloader to set UNIQUE_CHECKS = 0 when importing
  • #993714 Reducing the time spent with a global read lock
  • #1250271 make it more obvious when mydumper is not successful
  • #1250274 table doesnt exist should not be an error
  • #987344 Primary key name not quoted in showed_nulls test
  • #1075611 error when restoring tables with dashes in name
  • #1124106 Mydumper/myloader does not care for 0-s in AUTO_INCREMENT fields
  • #1125997 Timestamp data not portable between servers on differnt timezones

mydumper 0.6.1

  • #1273441 less-locking breaks consistent snapshot
  • #1267501 mydumper erroneously always attempts a dummy read
  • #1272310 main_connection keep an useless transaction opened and permit infinite metadata table lock
  • #1269376 mydumper 0.6.0 fails to compile “cast from pointer to integer of different size”
  • #1265155 create_main_connection use detected_server before setting it
  • #1267483 Build with MariaDB 10.x
  • #1272443 The dumping threads will hold metadata locks progressively while are dumping data.
So we have a new RPM package (CentOS 6) for 0.6.1 in the repo:

Monday, April 7, 2014

libfdk_aac for CentOS 6.5

From the ffmpeg webpage:

Fraunhofer FDK AAC codec library. This is currently the highest-quality AAC encoder available with ffmpeg. Requires ffmpeg to be configured with --enable-libfdk_aac (and additionally --enable-nonfree if you're also using --enable-gpl). But beware, it defaults to a low-pass filter of around 14kHz. If you want to preserve higher frequencies, use -cutoff 18000. Adjust the number to the upper frequency limit you prefer.

OK. Here is my own RPM of that library for CentOS 6.5:



Sorry, no time, friend. It is not tested. :(


Friday, March 21, 2014

Lighttpd: full site with SSL except for one URL / path / directory

Well... perhaps you need SSL on your whole site but you want to exclude a specific path / URL. This is my config:
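A minimal sketch of the idea, with /nossl/ as a hypothetical excluded path (mod_redirect required):

# redirect everything to HTTPS except the excluded path
$HTTP["scheme"] == "http" {
    $HTTP["url"] !~ "^/nossl/" {        # /nossl/ is an example path
        $HTTP["host"] =~ ".*" {
            url.redirect = (".*" => "https://%0$0")
        }
    }
}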


Wednesday, March 19, 2014

curl + IPv6: common errors using direct IP

FAIL!
$ curl -6 "http://a00:1450:4007:805::1018/" 
curl: (3) IPv6 numerical address used in URL without brackets 
FAIL!
$ curl -6 "http://[a00:1450:4007:805::1018]/"
curl: (3) [globbing] bad range in column 13
BINGO!
$ curl -g -6 "http://[2a00:1450:4007:805::1018]/"

302 Moved 
The document has moved

Wednesday, March 12, 2014

Myloader: resolving errors with no clear message

If you have a special mysqld configuration (not the default one), you may hit these two errors when working with the myloader tool:

1) The first issue looks like a problem with the backup file... but it is not. The message:
** (myloader:42526): CRITICAL **: cannot open file vcadminweb.message.sql.gz (24) * (myloader:46190): CRITICAL
In my case, I needed to check and raise the open-files limit (the (24) in the message is errno 24, EMFILE: too many open files):
ulimit -n 10000
...and solved!
2) The second one looks like a problem accessing the database... but it is not. The message:
**: Error switching to database CMSB076 whilst restoring table multidevicevideo_players
In my case, it was a very low wait_timeout in the mysqld config.
wait_timeout       = 300 
solved the issue for me.

Monday, March 10, 2014

Mysql: Drop database with dash in name

The issue:
mysql> drop database my-web;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-web' at line 1
mysql> drop database "my-web";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"my-web"' at line 1
mysql> drop database 'my-web';
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''my-web'' at line 1
 
The trick:
mysql> drop database `my-web`;
Query OK, 83 rows affected, 1 warning (0,02 sec)

Wednesday, March 5, 2014

Glassfish 4 tricks on CentOS 6

Three tricks here.

One. Init.d script for CentOS 6:
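A minimal sketch of such a script; GLASSFISH_HOME, the user and the domain name are assumptions:

#!/bin/bash
# glassfish     Start/stop GlassFish 4
# chkconfig: 2345 80 20
# description: GlassFish 4 application server

GLASSFISH_HOME=/opt/glassfish4/glassfish   # adjust to your install
GF_USER=glassfish                          # adjust to your service user
DOMAIN=domain1

case "$1" in
  start)   su - $GF_USER -c "$GLASSFISH_HOME/bin/asadmin start-domain $DOMAIN" ;;
  stop)    su - $GF_USER -c "$GLASSFISH_HOME/bin/asadmin stop-domain $DOMAIN" ;;
  restart) su - $GF_USER -c "$GLASSFISH_HOME/bin/asadmin restart-domain $DOMAIN" ;;
  *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac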




Two. Changing umask permission in Glassfish 4:

Add "umask 002" just before the "exec" line in $GLASSFISH_HOME/bin/asadmin

Three. Changing general logging file in Glassfish 4:

Change the logging.properties line, for example to:
 com.sun.enterprise.server.logging.GFFileHandler.file=/var/log/glassfish/server.log

Friday, February 28, 2014

GPAC RPM CentOS 6

Please let me remind you of my gpac rpm package:




For CentOS, it needs the atrpms repo:




and you need to enable the repo. You also need this: http://repo.enetres.net/x86_64/gpac-libs-0.5.0-2_noX.el6.x86_64.rpm

Note: this compiled version has no xul (mozilla), gtk, imagemagick or X dependencies.

Monday, February 24, 2014

MyDumper 0.6.0 RPM for CentOS 6.5

I had to make a tiny modification to the original mydumper 0.2 REMI SPEC file (https://github.com/remicollet/remirepo/blob/master/mydumper/mydumper.spec): the "if" condition had problems in my rpmbuild environment.

Here is the forked 0.6.0 SPEC file:

https://github.com/vicendominguez/CentOS-SPECS/blob/master/mydumper-0.6.0.spec

The rpm can be found here:

http://repo.enetres.net/repoview/mydumper.html

:)


Wednesday, February 19, 2014

Firewall rules to redirect Glassfish 4.0 ports (80, 443, 8080 and 8181 will be open to the Internet)

Hello! A fast trick to redirect ports when you are using a Java application server.
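The usual approach is a couple of REDIRECT rules in the nat table; a sketch (eth0 as the public interface is an assumption):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 8181
service iptables save   # persist the rules on CentOS 6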

Be careful: ports 80, 443, 8080 and 8181 will be open to the Internet. If you want to close 8080 and 8181, you will need an AJP proxy or packet marking in the firewall. Read this link:

http://stackoverflow.com/questions/11065124/iptables-redirect-80-to-8080-but-block-public-8080-access

:)

Tuesday, February 18, 2014

Dell H710P is equivalent to LSI MegaRAID SAS 9266-8i

The Dell H710P RAID card is equivalent to the LSI MegaRAID SAS 9266-8i, so if you want the MegaCLI tool to manage the card, you can go to the LSI webpage:

http://www.lsi.com/support/Pages/download-results.aspx?component=Storage+Component&productfamily=RAID+Controllers&productcode=P00584&assettype=0&productname=MegaRAID+SAS+9266-8i

"Management Software and Tools" tag and download the MegaCLI for Linux. I am using CentOS 6.x. Inside of the zip file there is a RPM. It works!
To show all cards in your box:

/opt/MegaRAID/MegaCli/MegaCli64 AdpAllInfo -aALL

PS: Check check!

$ /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LALL -aALL | grep State

State               : Optimal


Good Luck!

:)

Tuesday, February 11, 2014

CentOS SPEC files repo created

I have created a new repo with some of my SPEC files for CentOS:




And you can find the RPM packages generated from my SPEC files for CentOS 6 here:




;)

Wednesday, January 29, 2014

sockstat-zabbix-template monitor

Zabbix 2.2 comes with support of loadable modules for extending Zabbix agent and server without sacrificing performance.
A loadable module is basically a shared library used by Zabbix server or agent and loaded on startup. The library should contain certain functions, so that a Zabbix process may detect that the file is indeed a module it can load and work with.
Loadable modules have a number of benefits. Great performance and ability to implement any logic are very important, but perhaps the most important advantage is the ability to develop, use and share Zabbix modules. It contributes to trouble-free maintenance and helps to deliver new functionality easier and independently of the Zabbix core code base.
I have created an agent module to parse the /proc/net/sockstat info for Zabbix 2.2.x and later.
You will be able to watch orphan sockets or timewait sockets. They are interesting for DDoS detection, leaks in webapp services, etc.
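For reference, the raw data the module parses looks like this (the numbers obviously vary):

$ cat /proc/net/sockstat
sockets: used 231
TCP: inuse 27 orphan 0 tw 14 alloc 31 mem 3
UDP: inuse 9 mem 2
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0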
Screenshot:

Friday, January 17, 2014

wowza-zabbix-template monitor

This is a minimal template to get info from your Wowza REST URL into your Zabbix platform.
Two items, by now:
  • Global connections in Wowza
  • Global number of live streams
The template uses Zabbix macros to define the user/pass and the Wowza server URL. This allows fast configuration: you can apply the same template to all your Wowza hosts and change only the user/pass user macros per host.
Screenshot:

Tuesday, January 7, 2014

Tor Zeitgeist 2013

A tiny translation of my post: http://www.securityartwork.es/2014/01/07/tor-zeigeist-2013/

My steps for this report:

  • I created a .onion domain.
  • I kept it hidden from the Tor community (no wiki publishing...).
  • After that, I created one static web page on that .onion domain: gray background and only one "post" box, like the Google search page.
  • I put a little title: "Search engine" or similar.
  • And I "logged" the words typed into the search box.
  • Now we had to wait... exactly 6 months.

This is the result:


Welcome to TOR top-search-words. Oh my...!

:O