Friday, November 25, 2011

Postgresql - Handling 'NaN'::float -> crosstab function

Today AVR asked me: how do I compare against a NaN? And I was like, you mean an AND negated? LOL.
It turns out that 'NaN' is a symbolic value meaning "not a number".

In addition to ordinary numeric values, the numeric type allows the special value NaN, meaning "not-a-number". Any operation on NaN yields another NaN. When writing this value as a constant in an SQL command, you must put quotes around it, for example UPDATE table SET x = 'NaN'. On input, the string NaN is recognized in a case-insensitive manner.

Note: In most implementations of the "not-a-number" concept, NaN is not considered equal to any other numeric value (including NaN). In order to allow numeric values to be sorted and used in tree-based indexes, PostgreSQL treats NaN values as equal, and greater than all non-NaN values.

In addition to ordinary numeric values, the floating-point types have several special values:

If you want to compare against the 'NaN' ("not a number") float value, then do an explicit cast to float: float('NaN').
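To see those semantics outside the database, here's a quick Python sketch (not PostgreSQL's code, just the same IEEE 754 behavior the docs describe):

```python
import math

nan = float('NaN')

# IEEE 754: NaN compares unequal to everything, including itself
print(nan == nan)       # False
print(nan != nan)       # True

# so the safe test is isnan(), not equality
print(math.isnan(nan))  # True
```

PostgreSQL deliberately diverges from this for sorting and indexing, treating NaN as equal to NaN and greater than everything else.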

Here are some very good insights regarding this topic.

This issue arose because the crosstab function apparently returns this symbolic constant.

In the end she did this:
and it seemed to work.

Wednesday, November 23, 2011

Postgresql - Transposing a table with crosstab

Today we were asked to accomplish a simple task: show the count of "tramites" over time for each "oficina registral"; spago expects a single query to generate the related chart. My first approximation to the problem was a series of 19 virtual tables full-joined over the date_key field, which isn't very pretty. RAF came up with a neat solution in no time: use the tablefunc functions contained in Postgres' contrib module.

Once the contrib module is installed, you enable the tablefunc functions with something like:
psql template1 < /usr/share/postgresql/8.4/contrib/tablefunc.sql
so they're available for every new database you create. If the database that needs them already exists, replace template1 with that database's name.
The resulting query isn't that pretty either, but it's very efficient (at least it runs faster than my original solution).

select * from crosstab(
    'SELECT t.date_key::varchar(8) AS row_name,
            t.oficina_registral_key AS category,
            sum(cantidades_tramites) AS value
     FROM fact_tramite_historico t, dim_oficina_registral dor
     WHERE t.oficina_registral_key = dor.oficina_registral_key
     GROUP BY t.date_key, t.oficina_registral_key
     ORDER BY 1, 2',
    'SELECT oficina_registral_key FROM dim_oficina_registral ORDER BY 1'
) AS tab(fecha varchar(8),
    cat1 bigint, cat2 bigint, cat3 bigint, cat4 bigint, cat5 bigint,
    cat6 bigint, cat7 bigint, cat8 bigint, cat9 bigint, cat10 bigint,
    cat11 bigint, cat12 bigint, cat13 bigint, cat14 bigint, cat15 bigint,
    cat16 bigint, cat17 bigint, cat18 bigint, cat19 bigint, cat20 bigint);

All that remains is to replace each catN with its corresponding "oficina registral" name.
Clarification: each catN is declared as bigint because the aggregate function sum returns that data type.
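As an aside, what crosstab does under the hood is an ordinary pivot. Here's a minimal Python sketch of the same reshaping, with made-up data standing in for the real tables:

```python
# rows of (row_name, category, value), as the first crosstab query returns them
rows = [
    ('20111101', 1, 10),
    ('20111101', 2, 7),
    ('20111102', 1, 4),
]
categories = [1, 2]  # what the second crosstab query returns, in order

pivot = {}
for row_name, category, value in rows:
    pivot.setdefault(row_name, {})[category] = value

# one output row per row_name, one column per category, None (NULL) for gaps
table = [
    (row_name,) + tuple(cols.get(c) for c in categories)
    for row_name, cols in sorted(pivot.items())
]
print(table)
# [('20111101', 10, 7), ('20111102', 4, None)]
```

The missing (row_name, category) pairs come out as NULL, which is exactly where the 'NaN' surprise from the later post can sneak in after further arithmetic.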


Displaying aggregate function attributes, for example:
postgres=# \da count

Tuesday, November 22, 2011

Linux - ssh -X

To launch graphical programs from a ssh session, the -X flag must be specified.

from man ssh
-X Enables X11 forwarding. This can also be specified on a per-host basis in a configuration file.

X11 forwarding should be enabled with caution. Users with the ability to bypass file permissions
on the remote host (for the user's X authorization database) can access the local X11 display
through the forwarded connection. An attacker may then be able to perform activities such as key-
stroke monitoring.

For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default.
Please refer to the ssh -Y option and the ForwardX11Trusted directive in ssh_config(5) for more information.

I need to investigate the implications of those security warnings. It would be fantastic to implement that keystroke monitoring program.

Monday, November 21, 2011

Postgresql - Explain Analyze

I was researching a way to tell how long a query will take.
This command displays the execution plan that the PostgreSQL planner generates for the supplied statement. The execution plan shows how the table(s) referenced by the statement will be scanned — by plain sequential scan, index scan, etc. — and if multiple tables are referenced, what join algorithms will be used to bring together the required rows from each input table.

The most critical part of the display is the estimated statement execution cost, which is the planner's guess at how long it will take to run the statement (measured in units of disk page fetches). Actually two numbers are shown: the start-up time before the first row can be returned, and the total time to return all the rows. For most queries the total time is what matters, but in contexts such as a subquery in EXISTS, the planner will choose the smallest start-up time instead of the smallest total time (since the executor will stop after getting one row, anyway). Also, if you limit the number of rows to return with a LIMIT clause, the planner makes an appropriate interpolation between the endpoint costs to estimate which plan is really the cheapest.

The ANALYZE option causes the statement to be actually executed, not only planned. The total elapsed time expended within each plan node (in milliseconds) and total number of rows it actually returned are added to the display. This is useful for seeing whether the planner's estimates are close to reality.

One thing worth keeping in mind is that the ANALYZE option imposes overhead on the SQL statement:

In order to measure the run-time cost of each node in the execution plan, the current implementation of EXPLAIN ANALYZE can add considerable profiling overhead to query execution. As a result, running EXPLAIN ANALYZE on a query can sometimes take significantly longer than executing the query normally. The amount of overhead depends on the nature of the query.
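The overhead is easy to picture: the executor reads the clock around every plan node. Here's a toy Python sketch of the same effect (my own illustration, not PostgreSQL's implementation):

```python
import time

def work(n):
    # the "plain" execution: just do the work
    total = 0
    for i in range(n):
        total += i
    return total

def work_instrumented(n):
    # EXPLAIN ANALYZE-style execution: read the clock around every step,
    # which is the profiling overhead the docs warn about
    total, spent = 0, 0.0
    for i in range(n):
        t0 = time.perf_counter()
        total += i
        spent += time.perf_counter() - t0
    return total, spent

print(work(1000) == work_instrumented(1000)[0])  # True: same answer, extra clock calls
```

The answer is identical either way; only the wall-clock cost differs, and by how much depends on how cheap each step is relative to a clock read.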


Useful resources for further reading:

EXPLAIN ANALYZE relies on planner statistics, so it's important to keep statistics up to date so that the planner makes good decisions about how to execute a SQL command.

Sunday, November 20, 2011

Linux - recode tool

A great solution to deal with databases with different encodings.

This recode program has the purpose of converting files between various character sets and usages. When exact transliterations are not possible, as it is often the case, the program may get rid of the offending characters or fall back on approximations.

For instance, converting a dump file from one charset to another:
cat dump_with_one_char_set.sql | recode iso-8859-1..u8 > dump_with_utf8.sql

Furthermore, you'll need to edit this line in the file:
SET client_encoding = 'oldEncoding';
and change it to:
SET client_encoding = 'newEncoding';
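For the record, the conversion itself is just decode-with-the-old-charset, encode-with-the-new. A Python sketch of what recode does to the bytes (the sample text is made up):

```python
# what recode iso-8859-1..u8 does, byte-wise
latin1_bytes = 'Oficina Registral: año 2011, México'.encode('iso-8859-1')

utf8_bytes = latin1_bytes.decode('iso-8859-1').encode('utf-8')

# the text is unchanged; only the byte representation differs
print(utf8_bytes.decode('utf-8'))  # Oficina Registral: año 2011, México
# utf-8 needs an extra byte for each of ñ and é
print(len(utf8_bytes) - len(latin1_bytes))  # 2
```

That is also why the client_encoding line has to be edited by hand: recode changes the bytes, but the dump still declares the old charset.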

Postgresql - pg_bulkload

pg_bulkload -- provides high-speed data loading capability to PostgreSQL users.
pg_bulkload is designed to load huge amounts of data into a database. You can choose whether database constraints are checked and how many errors are ignored during loading. For example, you can skip integrity checks for performance when you copy data from another database to PostgreSQL. On the other hand, you can enable constraint checks when loading unclean data.

It could be a good option for loading data in our future ETL's. It also supports parallel loading.

Databases - Benchmarking

This can be a good option for benchmarking in our database tuning training.

And here are some related implementations

We also have pgbench.
One thing I still need to do is evaluate the features that each benchmark tool provides.

Wednesday, November 16, 2011

Postgresql - My first trigger ever!!

Today I stumbled upon triggers for the first time ever. This one updates a "calculated" geometry column in a postgis enabled database.

CREATE FUNCTION llenar_geom4326 () RETURNS trigger AS $$
BEGIN
    NEW.geom_4326 = ST_GeomFromText('POINT(' || NEW.longitud || ' ' || NEW.latitud || ')', 4326);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- the CREATE TRIGGER line was lost from the original post; the name and timing below are reconstructed
CREATE TRIGGER llenar_geom4326_trg BEFORE INSERT OR UPDATE
ON geo.lugar FOR EACH ROW
EXECUTE PROCEDURE llenar_geom4326 ();

Once I get past that Oracle certification issue, I'll study plpgsql in depth.


Cell Processor - Programming the Cell Processor. Part 1

Continuing my research on the cell processor.


The next step is to get my hands dirty with the IBM SDK.

One thing I haven't decided yet is whether to use Yellow Dog Linux, where the (open part of the) SDK comes preinstalled, or to set everything up from scratch with Fedora 7.

Another useful reference regarding the programming context of the cell processor.

Yellow Dog Linux:

Tuesday, November 15, 2011

Postgresql - Displaying Role Attributes

From man psql

\du[+] [ pattern ]
Lists database roles. If pattern is specified, only those roles whose names match the
pattern are listed. If the form \du+ is used, additional information is shown about each
role, including the comment for each role.

For instance:

switch_store=# \du+
List of roles
Role name | Attributes | Member of | Description
curso_sql | Superuser, Create DB | {} |

Monday, November 14, 2011

Postgresql - array / array_to_string function (packing a subquery into a single field)

Just for the sake of it, I wanted to concatenate the result set of a subquery into a single string field. I never thought it would be this easy:


CREATE OR REPLACE VIEW musicbrainz.artistas_mexicanos AS
SELECT AS id_artista, AS nombre,
array_to_string(array(select trim( from artist art left join artist_alias aa on aa.artist = and =
left join artist_name artn on = where =
),',') as alias, AS tipo_artista, AS genero, AS pais
FROM artist a
LEFT JOIN artist_name an ON =
LEFT JOIN country c ON =
LEFT JOIN gender g ON = a.gender
LEFT JOIN artist_type at ON = a.type
WHERE = 'Mexico'::text;

The cute part being:

array_to_string(array(select trim( from artist art left join artist_alias aa on aa.artist = and =
left join artist_name artn on = where =


Sunday, November 13, 2011

Oracle - shared pool and database buffer cache

It takes time to make them, but these maps help me see things in a more structured fashion, so I can glance at the whole picture.

Friday, November 11, 2011

Postgresql - the pgpass file

In this file we can store our access credentials for one or more databases, to skip typing them each time. Of course, some security precautions are in order.

30.13. The Password File

The file .pgpass in a user's home directory or the file referenced by PGPASSFILE can contain passwords to be used if the connection requires a password (and no password has been specified otherwise).

This file should contain lines of the following format:

hostname:port:database:username:password

Postgresql - Looking at database objects permissions

From man psql

\z [ pattern ]
Lists tables, views and sequences with their associated access privileges. If a pattern is
specified, only tables, views and sequences whose names match the pattern are listed.

This is an alias for \dp (“display privileges”).

Decoding the output:

Psql - show permissions on database objects
=> \z [object]
r -- SELECT ("read")
w -- UPDATE ("write")
a -- INSERT ("append")
x -- REFERENCES (foreign keys)
arwdRxt -- ALL PRIVILEGES (for tables)
* -- grant option for preceding privilege
/yyyy -- user who granted this privilege

As simple as that.
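Just for fun, the aclitem strings that \z prints (like curso_sql=arw/postgres) can be decoded mechanically. Here's a hedged Python sketch based on the table above; the helper and its name are mine, not psql's:

```python
# privilege letters as \z prints them (R meant RULE in older releases)
PRIVS = {'r': 'SELECT', 'w': 'UPDATE', 'a': 'INSERT', 'd': 'DELETE',
         'R': 'RULE', 'x': 'REFERENCES', 't': 'TRIGGER'}

def decode_aclitem(item):
    """Decode one entry like 'curso_sql=arw*/postgres'."""
    grantee, rest = item.split('=')
    letters, grantor = rest.split('/')
    privs = []
    i = 0
    while i < len(letters):
        name = PRIVS[letters[i]]
        # a '*' right after a letter is the grant option for that privilege
        if i + 1 < len(letters) and letters[i + 1] == '*':
            name += ' WITH GRANT OPTION'
            i += 1
        privs.append(name)
        i += 1
    return grantee or 'PUBLIC', privs, grantor

print(decode_aclitem('curso_sql=arw/postgres'))
# ('curso_sql', ['INSERT', 'SELECT', 'UPDATE'], 'postgres')
```

An empty grantee before the '=' means PUBLIC, matching the decoding rules listed above.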

Postgresql - Move a set of tables from one schema to another | Cheat sheet

Of course, the destination schema must exist first =-).

And then thanks to our friend the "ALTER TABLE" command you can issue something like:

General structure
ALTER TABLE myschema.mytable SET SCHEMA yourschema;

musicbrainz_db=# select 'alter table dw.'|| tablelist.tablename || ' set schema musicbrainz ; ' from (select tablename from pg_tables where schemaname = 'dw' and tablename not like 'pga%') as tablelist ;


The only trick here is knowing that the catalog of tables resides in the view pg_catalog.pg_tables.


As a footnote, I found this pretty useful cheat sheet for PostgreSQL, which I reproduce here as a backup:


Create the filesystem

$ export PGROOT="/var/lib/postgres"
$ mkdir -p $PGROOT/data && chown postgres.postgres $PGROOT/data
$ su - postgres -c "/usr/bin/initdb -D $PGROOT/data"

Make it UTF8 by default:

$ su - postgres -c "/usr/bin/initdb -E utf8 --locale=en_US.UTF-8 \

Create Databases

$ createdb -O owner -T some_template database_name

By default, PostgreSQL listens on TCP port 5432.

Dump all databases

pg_dumpall --clean > databases.sql
Dump a database with compression (-Fc)

pg_dump -Fc --file=database.sql --clean database
Dump a table

pg_dump [-d database] [--schema schema] -t table
Dump a table definition (no data)

pg_dump -s [-d database] [--schema schema] -t table
Restore a database from a dump file

pg_restore -Fc -d database database.sql
Start the PostgreSQL interactive terminal

psql

Reset password of postgres user
# su postgres
# psql -d template1


Psql - show a list of databases
=> \l
Lowercase L, not the number 1
Psql - show all users
=> select * from pg_user;
Psql - show all tables (including system tables)
=> select * from pg_tables;
Psql - show tables in the current context (database/schema)
=> \d
Psql - change current database
=> \c database;
Psql - show all schemas in the current database
=> \dn
Psql - Grant permissions on a schema to a user
=> GRANT ALL ON myschema TO user;
Psql - quit psql
=> \q
Psql - show help
=> \?
Psql - copy a table to a tab-delimited file
=> COPY table TO 'table.txt';
Psql - load a table from a tab-delimited file
=> COPY table FROM 'table.txt';
Psql - show permissions on database objects
=> \z [object]
r -- SELECT ("read")
w -- UPDATE ("write")
a -- INSERT ("append")
x -- REFERENCES (foreign keys)
arwdRxt -- ALL PRIVILEGES (for tables)
* -- grant option for preceding privilege
/yyyy -- user who granted this privilege
Run the vacuum utility
$ vacuumdb --verbose --analyze --all
Note: vacuum reclaims space from deleted records and updates indexes. It should
be set up in cron. Newer versions of postgresql may run vacuum automatically.
Increase performance with shared memory
One effective performance tuning tip for PostgreSQL is to increase the shared
memory buffers. This might require adding RAM to the server. Many Linux distros
default to 32MB of shared memory, controlled by two kernel parameters
(kernel.shmmax and kernel.shmall):

These values can be changed at run time, but it is better to set them at boot
using the /etc/sysctl.conf file. This increases shared memory to 1GB:
# increase shared buffers for postgres at boot
Then, tell PostgreSQL to use 768MB of the 1GB available in the
/var/lib/pgsql/data/postgresql.conf file:
shared_buffers = 98304 # min 16, at least max_connections*2, 8KB each

Restart PostgreSQL for the change to take effect.
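The 98304 figure above is nothing magical: shared_buffers is counted in 8KB pages, so 768MB works out to:

```python
# shared_buffers is expressed in 8KB pages
page_kb = 8
target_mb = 768

pages = target_mb * 1024 // page_kb
print(pages)  # 98304, the value used in postgresql.conf above
```

The same arithmetic works for any target size, as long as the server's page size really is the default 8KB.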

brew install postgresql

To build plpython against a specific Python, set PYTHON prior to brewing:
PYTHON=/usr/local/bin/python brew install postgresql

If this is your first install, create a database with:
initdb /usr/local/var/postgres

Automatically load on login with:
launchctl load -w

and if that fails try the homebrew mysql launchd approach:
cp /usr/local/Cellar/postgresql/8.4.4/org.postgresql.postgres.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/org.postgresql.postgres.plist

Or start manually with:
pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log

And stop with:
pg_ctl -D /usr/local/var/postgres stop -s -m fast

If you want to install the postgres gem, including ARCHFLAGS is recommended:
env ARCHFLAGS="-arch x86_64" gem install pg

PostgreSQL Examples

Adding a new user called BRIAN:
$ sudo su - postgres
$ createuser --createdb --username postgres --no-createrole \
--pwprompt BRIAN
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
$ exit


--Add a unique constraint to the email column in the customer table
ALTER TABLE customer ADD CONSTRAINT customer_email_key UNIQUE (email);

Wednesday, November 9, 2011

Oracle - wrapping sqlplus with rlwrap

A tiny but powerful tip: wrap the sqlplus utility with rlwrap so you get autocompletion, history recall, line editing, and reverse search.

One prerequisite is the readline development libraries, which in my particular case (CentOS 5) are available under the name readline-devel, so:

yum install readline-devel

Once you've downloaded and installed rlwrap,

you call sqlplus with something like:

$ rlwrap sqlplus

and have a lot of fun!




Postgresql - Documenting your database system

Documentation is very important in every system, and database systems are no exception. You can attach comments to virtually any database object.



COMMENT -- define or change the comment of an object


COMMENT ON
{
AGGREGATE agg_name (agg_type [, ...] ) |
CAST (source_type AS target_type) |
COLLATION object_name |
COLUMN relation_name.column_name |
CONSTRAINT constraint_name ON table_name |
CONVERSION object_name |
DATABASE object_name |
DOMAIN object_name |
EXTENSION object_name |
FOREIGN TABLE object_name |
FUNCTION function_name ( [ [ argmode ] [ argname ] argtype [, ...] ] ) |
INDEX object_name |
LARGE OBJECT large_object_oid |
OPERATOR operator_name (left_type, right_type) |
OPERATOR CLASS object_name USING index_method |
OPERATOR FAMILY object_name USING index_method |
[ PROCEDURAL ] LANGUAGE object_name |
ROLE object_name |
RULE rule_name ON table_name |
SCHEMA object_name |
SEQUENCE object_name |
SERVER object_name |
TABLE object_name |
TABLESPACE object_name |
TEXT SEARCH PARSER object_name |
TRIGGER trigger_name ON table_name |
TYPE object_name |
VIEW object_name
} IS 'text'


COMMENT stores a comment about a database object.

Only one comment string is stored for each object, so to modify a comment, issue a new COMMENT command for the same object. To remove a comment, write NULL in place of the text string. Comments are automatically dropped when their object is dropped.

In order to see the comments attached to a database object, you can get them through psql with the \dd [object] option, for instance:
musicbrainz_db=# \dd artistas_mexicanos
Object descriptions
-[ RECORD 1 ]-------------------------------------------------------------------------------------------------------------------------------------
Schema      | musicbrainz
Name        | artistas_mexicanos
Object      | view
Description | View of artists that reference the country with code 138 - Mexico (the problem is that many artists have that field set to null)

To get the comments of a particular set of database objects, you append a plus sign to the psql option, like this:
musicbrainz_db=# \dv+
List of relations
-[ RECORD 1 ]-------------------------------------------------------------------------------------------------------------------------------------
Schema      | musicbrainz
Name        | artistas_mexicanos
Type        | view
Owner       | postgres
Size        | 0 bytes
Description | View of artists that reference the country with code 138 - Mexico (the problem is that many artists have that field set to null)
-[ RECORD 2 ]-------------------------------------------------------------------------------------------------------------------------------------
Schema      | musicbrainz
Name        | canciones_mexicanas
Type        | view
Owner       | postgres
Size        | 0 bytes
Description |

Another reference:

Postgresql - GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO myuser;

A very nice shortcut for creating a read-only user, without granting table by table manually.

Don't forget to grant it CONNECT on the database and USAGE on the schema too.
GRANT CONNECT ON DATABASE musicbrainz_db TO curso_sql;

GRANT USAGE ON SCHEMA musicbrainz TO curso_sql;

Postgresql - Persistent default search path (schema) on a per-user basis

I'm tasked with giving a SQL course to my coworkers, and I want to spare them the details of dealing with schemas, or "search paths" in PostgreSQL lingo: the tables we'll be working with are not in the public schema, and I don't want to go into what a schema is just yet, since many of them don't have any previous experience with databases.

So I created a user for them; to change its default search path you only need to fire this up:
ALTER USER username SET search_path = schema1,schema2,schema3,etc;

But keep in mind that if you plan to use this user in other databases where that schema doesn't exist, you won't be able to see any table unless you set the correct search_path. So I guess you'd better always add the public schema to the user's default search path.

Tuesday, November 8, 2011

Oracle - Undo data - Part 1

Oracle's way of ensuring data consistency and supporting rollback and recovery operations is very interesting.

I wish the day was a little bit longer.

Postgresql - Renaming a database

As simple as that:
ALTER DATABASE name RENAME TO newname;

Of course, there are a lot of other attributes you can change with the ALTER DATABASE instruction.



ALTER DATABASE -- change a database


ALTER DATABASE name [ [ WITH ] option [ ... ] ]

where option can be:


ALTER DATABASE name SET parameter { TO | = } { value | DEFAULT }

I didn't know about that CONNECTION LIMIT parameter! So you can specify, on a per-database basis, how many connections are allowed! And all this for free! The more I study Oracle, the more I love PostgreSQL =-).

Monday, November 7, 2011

Oracle - Network Environment - intro

A brief depiction of the Oracle Network Environment, I wish I had the time to make more mental maps.

Postgresql - set datestyle

Sometimes you're moving data and format differences arise. One of the most common differences is between date formats.
To set the general date/time output format, the SET command can be applied to the run-time variable DATESTYLE. This variable may be set to one of four available general styles shown in Table 3-18.

Table 3-18. Date output formats

General format | Description                | Example
ISO            | ISO-8601 standard          | 2001-06-25 12:24:00-07
SQL            | Traditional SQL style      | 06/25/2001 12:24:00.00 PDT
Postgres       | Original PostgreSQL style  | Mon 25 Jun 12:24:00 2001 PDT
German         | Regional style for Germany | 25.06.2001 12:24:00.00 PDT

As an example, you can use the following SQL statement to set the date style to SQL :
booktown=#  SET DATESTYLE TO SQL; 

If you perform a SELECT current_timestamp query after setting this variable, PostgreSQL should return the current time using the SQL format as instructed:
booktown=#  SELECT current_timestamp; 
08/10/2001 13:25:55.00 PDT
(1 row)

The SHOW command can be used to display the current value of the DATESTYLE variable while PostgreSQL is running.
booktown=#  SHOW DATESTYLE; 
NOTICE: DateStyle is SQL with US (NonEuropean) conventions

In addition to these general formats, PostgreSQL's date output format has two other variants which further describe how to display the date, shown in Table 3-19: European and non-European (U.S.). These determine whether the format is day followed by month, or vice versa. This variation can be applied on top of the previous four general formats with the same syntax to SET DATESTYLE and will not modify your chosen format except for the arrangement of the month and day.
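The four styles are easy to reproduce; here's a quick Python sketch of the same date layouts (timezone suffix and fractional seconds omitted):

```python
from datetime import datetime

ts = datetime(2001, 6, 25, 12, 24, 0)

styles = {
    'ISO':    ts.strftime('%Y-%m-%d %H:%M:%S'),  # 2001-06-25 12:24:00
    'SQL':    ts.strftime('%m/%d/%Y %H:%M:%S'),  # 06/25/2001 12:24:00
    'German': ts.strftime('%d.%m.%Y %H:%M:%S'),  # 25.06.2001 12:24:00
}
for name, text in styles.items():
    print(name, '->', text)
```

The European versus U.S. variant described above is exactly the %d/%m swap between the SQL and German rows.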

Postgresql - Bit datatype

I'd never used it before. Quite an interesting datatype.
Bit strings are strings of 1's and 0's. They can be used to store or visualize bit masks. There are two SQL bit types: bit(n) and bit varying(n), where n is a positive integer.

bit type data must match the length n exactly; it is an error to attempt to store shorter or longer bit strings. bit varying data is of variable length up to the maximum length n; longer strings will be rejected. Writing bit without a length is equivalent to bit(1), while bit varying without a length specification means unlimited length.
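Bit masks work the same everywhere; here's a tiny Python sketch of the usual operations a bit(n) column would store:

```python
# three permission flags packed into one 3-bit mask
READ, WRITE, EXEC = 0b100, 0b010, 0b001

perms = READ | WRITE        # set two bits
assert perms & READ         # test a bit
assert not perms & EXEC
perms &= ~WRITE             # clear a bit

print(format(perms, '03b'))  # 100, i.e. the bit(3) value B'100'
```

Storing the mask in a bit(3) column keeps the width fixed, exactly as the length rule above demands.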

Sunday, November 6, 2011

Postgresql - Database object identifier (OID)

This post is associated with the other post where I described the error:

ERROR: could not read block 18875 of relation base/105574/105673: read only 0 of 8192 bytes

It turns out that the number 105574 refers to the database object identifier (database OID), which can be queried with something like this:


postgres=# select * from pg_database where oid = 105574;
-[ RECORD 1 ]-+------------
datname       | dw_ifrembi
datdba        | 10
encoding      | 6
datcollate    | es_MX.UTF-8
datctype      | es_MX.UTF-8
datistemplate | f
datallowconn  | t
datconnlimit  | -1
datlastsysoid | 11563
datfrozenxid  | 648
dattablespace | 1663
datconfig     |
datacl        |


The funny thing is that the OID is not displayed in the view, so the query can be tweaked to something like:


postgres=# select oid,* from pg_database where oid = 105574;
-[ RECORD 1 ]-+------------
oid           | 105574
datname       | dw_ifrembi
datdba        | 10
encoding      | 6
datcollate    | es_MX.UTF-8
datctype      | es_MX.UTF-8
datistemplate | f
datallowconn  | t
datconnlimit  | -1
datlastsysoid | 11563
datfrozenxid  | 648
dattablespace | 1663
datconfig     |


Postgresql - The pg_class table

Another very useful catalog table.
The catalog pg_class catalogs tables and most everything else that has columns or is otherwise similar to a table. This includes indexes (but see also pg_index), sequences, views, composite types, and TOAST tables; see relkind. Below, when we mean all of these kinds of objects we speak of "relations". Not all columns are meaningful for all relation types.

For instance in another post I have described the following error:

ERROR: could not read block 18875 of relation base/105574/105673: read only 0 of 8192 bytes


dw_ifrembi=> select * from pg_class where relfilenode = 105673;
-[ RECORD 1 ]--+---------------------------------------------------
relname        | fact_servicio_referencia_pago
relnamespace   | 2200
reltype        | 105675
relowner       | 10
relam          | 0
relfilenode    | 105673
reltablespace  | 0
relpages       | 15766
reltuples      | 1.15017e+06
reltoastrelid  | 0
reltoastidxid  | 0
relhasindex    | t
relisshared    | f
relistemp      | f
relkind        | r
relnatts       | 11
relchecks      | 0
relhasoids     | f
relhaspkey     | t
relhasrules    | f
relhastriggers | t
relhassubclass | f
relfrozenxid   | 11192264
relacl         | {postgres=arwdDxt/postgres,ifrem=arwdDxt/postgres}
reloptions     |


In this case the number 105673, being the relfilenode, gave me the name of the relation where the issue was arising.

Postgresql - Reindexing a table

The arguments for reindexing are:

  • newly created indexes are faster

  • indexes tend to become bloated with frequent updates of keys.

I also found this interesting article related to the bloating issue.

I came to this subject brought by this error:

ERROR: could not read block 18875 of relation base/105574/105673: read only 0 of 8192 bytes

Reindexing the table that triggered the error was the solution this time.

Saturday, November 5, 2011

Tablespaces in postgresql

Managing tablespaces in postgresql is a really straightforward process.
Tablespaces in PostgreSQL allow database administrators to define locations in the file system where the files representing database objects can be stored. Once created, a tablespace can be referred to by name when creating database objects.
The location must be an existing, empty directory that is owned by the PostgreSQL operating system user.

Regarding the use of symbolic links, I thought Windows didn't have them, but it turns out I was wrong.

Anyway, it's a known fact that Windows doesn't perform as well with databases as other OSes do, so... who cares about Windows? =-).

Moving a tablespace

I found this piece of advice:

which leads to the question: how to move database objects to a different tablespace?

Lots of Manual Work

Moving a complete database to a different tablespace involves 3 steps :

  1. You have to tell PostgreSQL to create new tables and indexes for the database in the new tablespace. This can be done with:


    ALTER DATABASE "[database_name]" SET default_tablespace = [new_tablespace];

  2. Then you have to move each table in that database to the new tablespace with:


    ALTER TABLE "[table_name]" SET TABLESPACE "[new_tablespace]";

    This line of SQL code is needed for each table in the database.

  3. And that's not all, because the indexes also have to be moved. That's done with:


    ALTER INDEX "[index_name]" SET TABLESPACE "[new_tablespace]";

    This line of SQL code is needed for each index in the database.

Not so smooth, but hey it ain't that difficult =-P.
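Steps 2 and 3 cry out for scripting. Here's a hedged Python sketch that only generates the SQL text from lists of names; the names below are made up, and in practice you'd pull them from pg_tables and pg_indexes:

```python
# hypothetical object names; in reality these come from the catalogs
tables = ['fact_tramite_historico', 'dim_oficina_registral']
indexes = ['fact_tramite_historico_pkey']

new_ts = 'new_ts'  # hypothetical tablespace name

# one ALTER per table (step 2) and one per index (step 3)
statements = (
    ['ALTER TABLE "%s" SET TABLESPACE "%s";' % (t, new_ts) for t in tables] +
    ['ALTER INDEX "%s" SET TABLESPACE "%s";' % (i, new_ts) for i in indexes]
)
print('\n'.join(statements))
```

The output can then be piped into psql, the same trick used earlier in the "move tables between schemas" post.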

Tuesday, November 1, 2011

Oracle Database Memory Structures - intro

And for each memory structure in the SGA there's a bunch of info.

Oracle Database Server Architecture - intro

Reviewing concepts. A brief introduction to Oracle Database Server Architecture.

There's an entire world of information regarding this topic,  let's get busy.


A very simple watchdog

I was looking for a convenient way to check periodically whether a process is running and, if not, fire it up. This is what I came up with:


#!/bin/sh
# exit quietly if the process is already up
if ps -ef | grep -v grep | grep spagobi2 > /dev/null ; then
    exit 0
fi
# otherwise go start it (the actual startup command was lost from the original post)
cd /usr/local/alterno/servidor/spagobi2/bin/
exit 0


This script is executed every five minutes through a cronjob with something like this:



*/5 * * * * /root/


I don't like to run things as root, but that's not something for me to decide in this particular case.
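The same watchdog idea works in Python too, in case the shell version ever needs to grow. is_running and the startup call are my own hedged sketch, not a drop-in for the cron job above:

```python
import subprocess

def is_running(pattern):
    """True if any line of `ps -ef` (other than ps itself) contains pattern."""
    out = subprocess.run(['ps', '-ef'], capture_output=True, text=True).stdout
    return any(pattern in line and 'ps -ef' not in line
               for line in out.splitlines())

if not is_running('spagobi2'):
    # hypothetical startup invocation; the real script name would go here
    # subprocess.Popen(['./startup.sh'],
    #                  cwd='/usr/local/alterno/servidor/spagobi2/bin/')
    pass
```

The substring filter plays the same role as the grep -v grep in the shell version: it keeps the checker from matching itself.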

There's yet another alternative to achieve this in a fancier way: using monit.

I'll try it next time.