Monday, June 16, 2025

SnowSQL config password needs to be enclosed in double quotes

Today I tested using snowsql to connect to Snowflake. I put the connection credentials in the config file, but my initial attempt failed with the following error message:



(venv) c:\Users\Yu\.snowsql>snowsql

250001 (08001): Failed to connect to DB: abcdefg-hij01234.snowflakecomputing.com:443. Incorrect username or password was specified.

If the error message is unclear, enable logging using -o log_level=DEBUG and see the log to find out the cause. Contact support for further help.

Goodbye!



It turned out I needed to enclose the password in double quotes in the config file.

For example, in the c:\Users\Yu\.snowsql\.config file:

accountname = abcdefg-hij01234
username = denissun
# password = Ab0cdE725##FG0   -- not working
password = "Ab0cdE725##FG0"   # works!


After adding the double quotes, I tested again and it succeeded:

(venv) c:\Users\Yu\.snowsql>snowsql
* SnowSQL * v1.4.1
Type SQL statements or !help
denissun9883#COMPUTE_WH@(no database).(no schema)>!help
+------------+-------------------------------------------+-------------+--------------------------------------------------------------------------------------------+
| Command    | Use                                       | Aliases     | Description                                                                                |
|------------+-------------------------------------------+-------------+--------------------------------------------------------------------------------------------|
| !abort     | !abort <query id>                         |             | Abort a query                                                                              |
| !connect   | !connect <connection_name>                |             | Create a new connection                                                                    |
| !define    | !define <variable>=<value>                |             | Define a variable as the given value                                                       |
| !edit      | !edit <query>                             |             | Opens up a text editor. Useful for writing longer queries. Defaults to last query          |
| !exit      | !exit                                     | !disconnect | Drop the current connection                                                                |
| !help      | !help                                     | !helps, !h  | Show the client help.                                                                      |
| !options   | !options                                  | !opts       | Show all options and their values                                                          |
| !pause     | !pause                                    |             | Pauses running queries.                                                                    |
| !print     | !print <message>                          |             | Print given text                                                                           |
| !queries   | !queries help, <filter>=<value>, <filter> |             | Lists queries matching the specified filters. Write <!queries> help for a list of filters. |
| !quit      | !quit                                     | !q          | Drop all connections and quit SnowSQL                                                      |
| !rehash    | !rehash                                   |             | Refresh autocompletion                                                                     |
| !result    | !result <query id>                        |             | See the result of a query                                                                  |
| !set       | !set <option>=<value>                     |             | Set an option to the given value                                                           |
| !source    | !source <filename>, <url>                 | !load       | Execute given sql file                                                                     |
| !spool     | !spool <filename>, off                    |             | Turn on or off writing results to file                                                     |
| !system    | !system <system command>                  |             | Run a system command in the shell                                                          |
| !variables | !variables                                | !vars       | Show all variables and their values                                                        |
+------------+-------------------------------------------+-------------+--------------------------------------------------------------------------------------------+


This is most likely because my password contains special characters (the ## in particular), which the config file parser does not handle correctly unless the value is enclosed in quotes.
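As a related tip, SnowSQL also supports named connections in the same config file, which makes it easy to test the quoted password non-interactively. A minimal sketch, assuming the same account (the connection name test_conn is illustrative):

[connections.test_conn]
accountname = abcdefg-hij01234
username = denissun
password = "Ab0cdE725##FG0"

Then connect and run a quick check with:

snowsql -c test_conn -q "select current_version();"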


Wednesday, March 09, 2016

MariaDB - a surprise dependent subquery execution plan

Welcome me to the MySQL world. While working with MariaDB 10.0.21, I came across the two queries shown below; one uses table aliases and the other does not, and it seemed to me there should be no difference between them in functionality or performance.

-- query 1 with alias a and b
delete a FROM mydb.v_audit_info a 
       WHERE a.api_seq in  
                (SELECT cart_line_seq
                   FROM mydb.v_shopping_cart_item b
                  WHERE cart_seq = 127883 );
-- query 2 without alias a and b
delete  FROM mydb.v_audit_info  
       WHERE api_seq in  
                (SELECT cart_line_seq
                   FROM mydb.v_shopping_cart_item 
                  WHERE cart_seq = 127883 );


However, when I checked the execution plans, I got quite different results:

-- query 1
+------+-------------+-------+------+-----------------------------------------------------------+---------------------------+---------+-----------------------------+------+-------------+
| id   | select_type | table | type | possible_keys                                             | key                       | key_len | ref                         | rows | Extra       |
+------+-------------+-------+------+-----------------------------------------------------------+---------------------------+---------+-----------------------------+------+-------------+
|    1 | PRIMARY     | b     | ref  | PRIMARY,v_shopping_cart_item_udx1,idx2_shopping_cart_item | v_shopping_cart_item_udx1 | 9       | const                       |    1 | Using index |
|    1 | PRIMARY     | a     | ref  | idx1_audit_info                                           | idx1_audit_info           | 9       | mydb.b.cart_line_seq        |    3 |             |
+------+-------------+-------+------+-----------------------------------------------------------+---------------------------+---------+-----------------------------+------+-------------+

-- query 2
+------+--------------------+----------------------+-----------------+-----------------------------------------------------------+---------+---------+------+-------+-------------+
| id   | select_type        | table                | type            | possible_keys                                             | key     | key_len | ref  | rows  | Extra       |
+------+--------------------+----------------------+-----------------+-----------------------------------------------------------+---------+---------+------+-------+-------------+
|    1 | PRIMARY            | v_audit_info         | ALL             | NULL                                                      | NULL    | NULL    | NULL | 60646 | Using where |
|    2 | DEPENDENT SUBQUERY | v_shopping_cart_item | unique_subquery | PRIMARY,v_shopping_cart_item_udx1,idx2_shopping_cart_item | PRIMARY | 8       | func |     1 | Using where |
+------+--------------------+----------------------+-----------------+-----------------------------------------------------------+---------+---------+------+-------+-------------+

In the execution plan for query 1, the IN subquery is converted into a join: we first read the qualifying cart_line_seq values from v_shopping_cart_item via the v_shopping_cart_item_udx1 index, then access v_audit_info through the idx1_audit_info index.

In the execution plan for query 2, the subquery becomes a DEPENDENT SUBQUERY: we have to scan all 60646 rows of v_audit_info and, for each row, evaluate the subquery condition.

Using profiling, we can clearly see the huge difference in Duration:

+----------+------------+--------------------------------------------------------------------+
| Query_ID | Duration   | Query                                                              |
+----------+------------+--------------------------------------------------------------------+
|        1 | 0.00331659 | delete a FROM mydb.v_audit_info a
       WHERE a.api_seq in
                (SELECT cart_line_seq
                   FROM mydb.v_shopping_cart_item b
                  WHERE cart_seq = 127883 ) |
....


|       13 | 0.23299025 | delete  FROM mydb.v_audit_info
       WHERE api_seq in
                (SELECT cart_line_seq
                   FROM mydb.v_shopping_cart_item
                  WHERE cart_seq = 127883 )      |
+----------+------------+--------------------------------------------------------------------+
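For the record, the Duration figures above come from the session-level query profiler, which can be switched on like this (a minimal sketch):

SET profiling = 1;
-- run the two DELETE statements, then:
SHOW PROFILES;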

I don't know why, but I will certainly suggest using the alias version. Another option that avoids the dependent subquery altogether is an explicit join, as sketched below.
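A minimal sketch of that explicit-join rewrite (assuming cart_line_seq is unique within a cart; multi-table DELETE syntax works in both MySQL and MariaDB):

DELETE a
  FROM mydb.v_audit_info a
  JOIN mydb.v_shopping_cart_item b
    ON a.api_seq = b.cart_line_seq
 WHERE b.cart_seq = 127883;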

Tuesday, April 14, 2015

Monitoring Oracle GoldenGate Latency

Today I implemented an approach to monitoring OGG latency; here is what I did. The idea is a heartbeat table: a scheduler job at the source updates a timestamp every minute, OGG replicates those updates through each Extract/Pump/Replicat path, and at the target the difference between the current time and the replicated timestamp approximates the end-to-end latency.

1. Create a table gg_latency in source and target databases:
create table gg_latency
(
  extr varchar2(10),
  pump varchar2(10),
  repl varchar2(10),
  update_time date
);



alter table gg_latency add constraint gg_latency_pk primary key(extr, pump, repl) using index;

2. Create a procedure that is used to update the latency table: 

create or replace procedure proc_update_gg_latency
is
begin
  for rec in ( select * from gg_latency)
  loop
     update gg_latency set update_time=sysdate where extr=rec.extr and pump=rec.pump and repl = rec.repl;
     commit;
  end loop;
end;
/
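Side note: the loop above touches the rows one by one; a single-statement equivalent would be the sketch below. Either form works, since OGG captures the changes row by row in any case.

update gg_latency set update_time = sysdate;
commit;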
3. Populate the table with every possible combination of the processing group names.

For example, in my replication environment I have three Extract groups and three Pump groups at the source, and 15 Replicat groups at the target:
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1A');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1B');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1C');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1D');
  
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2A');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2B');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2C');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2D');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2F');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2G');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2H');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2M');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2N');

insert into gg_latency(extr, pump, repl) values('ECRUDR3', 'PCRUDR3', 'CRURDR3');

4. For each EXTRACT group parameter file at source, add a TABLE clause with the WHERE option for the GG_LATENCY table, e.g.

    TABLE DB_ADMIN.GG_LATENCY, WHERE (EXTR = "ECRUDR1");

Note: do this for all the EXTRACT groups


5. For each PUMP group parameter file at source, add a TABLE clause with the WHERE option for the GG_LATENCY table, e.g.

    TABLE DB_ADMIN.GG_LATENCY, WHERE (PUMP = "PCRUDR1");

Note: add the line before the PASSTHRU parameter if it exists; do this for all the PUMP groups.


6. For each REPLICAT group parameter file at target, add a MAP clause with the WHERE option for the GG_LATENCY table, e.g.

    MAP DB_ADMIN.GG_LATENCY, TARGET DB_ADMIN.GG_LATENCY, WHERE (REPL = 'CRURDR1');

Note: do this for all the REPLICAT groups. In OGG 12c, single quotation marks should be used for literal strings.

7. Bounce all the processes whose parameter files were modified.


8. Create a scheduler job to update the latency table every minute:

begin
  DBMS_SCHEDULER.create_job (
    job_name        => 'UPDATE_GG_LATENCY_TABLE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN db_admin.proc_update_gg_latency; END;',
    start_date      => trunc(sysdate, 'HH24'),
    repeat_interval => 'freq=minutely',
    end_date        => NULL,
    enabled         => TRUE
  );
end;
/
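To confirm the job is scheduled and running, a query like the following should work (assuming the job was created in the db_admin schema):

select job_name, enabled, state, last_start_date
  from dba_scheduler_jobs
 where job_name = 'UPDATE_GG_LATENCY_TABLE';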

9. Check latency with the following query at the target:
 
SQL> select extr, pump, repl, update_time, round((sysdate - update_time) *24*60) latency_mins from gg_latency;

EXTR       PUMP       REPL       UPDATE_TIME          LATENCY_MINS
---------- ---------- ---------- -------------------- ------------
ECRUDR1    PCRUDR1    CRURDR1D   14-Apr-2015 12:46:00            1
ECRUDR1    PCRUDR1    CRURDR1B   14-Apr-2015 12:46:00            1
ECRUDR1    PCRUDR1    CRURDR1A   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2D   14-Apr-2015 12:46:00            1
ECRUDR1    PCRUDR1    CRURDR1C   14-Apr-2015 12:46:00            1
ECRUDR1    PCRUDR1    CRURDR1    14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2H   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2C   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2N   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2B   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2A   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2M   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2G   14-Apr-2015 12:46:00            1
ECRUDR2    PCRUDR2    CRURDR2F   14-Apr-2015 12:46:00            1
ECRUDR3    PCRUDR3    CRURDR3    14-Apr-2015 12:46:00            1

15 rows selected.

Note: as we update the table every minute, the granularity of the latency measurement is one minute.
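The same table also supports simple threshold-based alerting; a sketch (the 5-minute threshold is illustrative):

select extr, pump, repl, update_time,
       round((sysdate - update_time) * 24 * 60) latency_mins
  from gg_latency
 where update_time < sysdate - 5/1440;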

Friday, March 13, 2015

Just another example - Writing efficient SQL with an analytic function

I have a monitoring job set up to alert me when the buffer gets per execution of a SQL statement rise above a certain threshold. Today I received the alert below:
=================================================
start :  13-MAR-15 02.00.23.885 PM
end   :  13-MAR-15 03.00.48.168 PM
snapid from     107851  to     107853
=================================================

********************************************************
list sql with buffer gets per execution > 100000
********************************************************


!!!! ##########  Expensive SQL found ####### !!!!                               
instance   : 1                                                                  
sql_id     : drdbm833ack3c                                                      
Buffer get : 260358                                                             
Execs      : 1                                                                  
BG/exec    : 260358                                                             
Gets/row   : 700                                                                
SQL TEXT   :                                                                    
SELECT p.rec_id, p.CREATE_DATE,                                                 
p.ordered_zzzyyy_type,p.qualified_zzzyyy_type,p.req_type,p.c_transid,p.status,p.
ERR_CODE,p.ERR_DESC,p.SVC_STATE,p.IS_DOWNGRADED,p.ETF_WAIVE_FLAG  FROM          
BL_XYZ_CHNG_PKG p WHERE p.req_type='R'  AND p.status || ''='IN'  AND            
p.jrs_indicator IN ('1','2','3')  and p.c_transid IN (SELECT MAX(C_TRANSID) FROM
BL_XYZ_CHNG_PKG  GROUP BY rec_id)                                               
 
...
After checking this SQL, I realized it may be a classic example where using an analytic function can save resources.

Original query -- full table scan on the same table twice, 259K consistent gets:
SQL> SELECT p.rec_id,
  2         p.CREATE_DATE,
  3         p.ordered_zzzyyy_type,
  4         p.qualified_zzzyyy_type,
  5         p.req_type ,
  6         p.c_transid,p.status,
  7         p.ERR_CODE,p.ERR_DESC,p.SVC_STATE,p.IS_DOWNGRADED,p.ETF_WAIVE_FLAG
  8    FROM
  9         xyzu.BL_XYZ_CHNG_PKG p
 10  WHERE p.req_type='R'
 11     AND p.status || ''='IN'
 12     AND p.jrs_indicator IN ('1','2','3')
 13     and p.c_transid IN
 14          (SELECT MAX(C_TRANSID) FROM BL_XYZ_CHNG_PKG  GROUP BY rec_id)
 15  ;

242 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3808252951

------------------------------------------------------------------------------------------------
| Id  | Operation            | Name            | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                 |  1613 |   127K|       | 83112   (3)| 00:16:38 |
|*  1 |  HASH JOIN           |                 |  1613 |   127K|       | 83112   (3)| 00:16:38 |
|*  2 |   TABLE ACCESS FULL  | BL_XYZ_CHNG_PKG |  1613 |   107K|       | 25463   (3)| 00:05:06 |
|   3 |   VIEW               | VW_NSO_1        |  6443K|    79M|       | 57578   (2)| 00:11:31 |
|   4 |    HASH GROUP BY     |                 |  6443K|    86M|   148M| 57578   (2)| 00:11:31 |
|   5 |     TABLE ACCESS FULL| BL_XYZ_CHNG_PKG |  6459K|    86M|       | 25154   (2)| 00:05:02 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("P"."C_TRANSID"="$nso_col_1")
   2 - filter("P"."REQ_TYPE"='R' AND "P"."STATUS"||''='IN' AND ("P"."JRS_INDICATOR"='1'
              OR "P"."JRS_INDICATOR"='2' OR "P"."JRS_INDICATOR"='3'))


Statistics
----------------------------------------------------------
         95  recursive calls
          0  db block gets
     259618  consistent gets
     267154  physical reads
          0  redo size
      10348  bytes sent via SQL*Net to client
        261  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        242  rows processed

Rewrite with the rank() over (partition by ...) construct -- one full table scan, 129K consistent gets. Note that the two forms are not logically equivalent in general: the original accepts a row whose c_transid matches the maximum of any rec_id group computed over the whole table, while the rewrite keeps the top c_transid per rec_id among the filtered rows only; in this case both happen to return the same 242 rows.

SQL> SELECT rec_id,
  2         CREATE_DATE,
  3         ordered_zzzyyy_type,
  4         qualified_zzzyyy_type,
  5         req_type ,
  6         c_transid,status,
  7         ERR_CODE,ERR_DESC,SVC_STATE,IS_DOWNGRADED,ETF_WAIVE_FLAG
  8  from
  9  (
 10     SELECT p.rec_id,
 11            p.CREATE_DATE,
 12            p.ordered_zzzyyy_type,
 13            p.qualified_zzzyyy_type,
 14            p.req_type ,
 15            p.c_transid,p.status,
 16            p.ERR_CODE,p.ERR_DESC,p.SVC_STATE,p.IS_DOWNGRADED,p.ETF_WAIVE_FLAG,
 17            rank() over ( partition by rec_id order by c_transid desc ) rank
 18       FROM
 19            xyzu.BL_XYZ_CHNG_PKG p
 20     WHERE p.req_type='R'
 21        AND p.status || ''='IN'
 22        AND p.jrs_indicator IN ('1','2','3')
 23  )  A
 24  where rank=1;

242 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3870177004

--------------------------------------------------------------------------------------------
| Id  | Operation                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                 |  1613 |   546K| 25464   (3)| 00:05:06 |
|*  1 |  VIEW                    |                 |  1613 |   546K| 25464   (3)| 00:05:06 |
|*  2 |   WINDOW SORT PUSHED RANK|                 |  1613 |   107K| 25464   (3)| 00:05:06 |
|*  3 |    TABLE ACCESS FULL     | BL_XYZ_CHNG_PKG |  1613 |   107K| 25463   (3)| 00:05:06 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RANK"=1)
   2 - filter(RANK() OVER ( PARTITION BY "REC_ID" ORDER BY
              INTERNAL_FUNCTION("C_TRANSID") DESC )<=1)
   3 - filter("P"."REQ_TYPE"='R' AND "P"."STATUS"||''='IN' AND
              ("P"."JRS_INDICATOR"='1' OR "P"."JRS_INDICATOR"='2' OR "P"."JRS_INDICATOR"='3'))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
     129827  consistent gets
     127794  physical reads
          0  redo size
      10184  bytes sent via SQL*Net to client
        261  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        242  rows processed
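The one-scan pattern generalizes to any "latest row per key" requirement. A minimal sketch with illustrative names (some_table, key_col, and ts_col are hypothetical):

SELECT *
  FROM (SELECT t.*,
               RANK() OVER (PARTITION BY key_col ORDER BY ts_col DESC) rnk
          FROM some_table t)
 WHERE rnk = 1;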

Friday, March 06, 2015

Killing DBSNMP sessions not easy

We had a performance issue on a production database today: the CPU utilization went up. When I logged into the server and tried to connect to the database as sysdba, I got an ORA-00020 error:
$sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.2.0 Production on Fri Mar 6 10:55:10 2015

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-00020: maximum number of processes (700) exceeded
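As an aside, when ORA-00020 blocks even a SYSDBA login, a preliminary connection can usually still attach to the instance for diagnostics, since it does not create a regular session (available in 11g):

sqlplus -prelim "/ as sysdba"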

Another DBA reported that the session count for the DBSNMP user had gone way up. This user comes from OEM. I stopped the OEM agent and he killed all the DBSNMP sessions at the database level. However, almost an hour later, we still saw a high DBSNMP session count:
Username           COUNT(*)
---------------- ----------
DBSNMP                  352
XXXX100                   1
XXXXSELECT                1
XXXXUSER                232
IVIEW2XXXX               25
ORACLE                    5
XX09195                   1
XXX6621                   6
XXX3294                   1
XXDBMON                   1
                         36
note: some usernames are masked to protect the innocent.

Of course, we could see all the DBSNMP sessions marked as "KILLED":
SQL> select username, status from v$session where username='DBSNMP';

Username         Status
---------------- --------
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
DBSNMP           KILLED
...

So it appeared to be a good idea to kill the processes at the OS level. I fired off the following script, which I usually use to generate the KILL commands:
set pages 0
spool /tmp/kill_os

select 'kill -9 '|| p.spid
from v$process p , v$session s
where p.background is null
and s.paddr = p.addr (+)
and p.spid is not null
and s.username='DBSNMP'
order by p.spid
/
spool off
Guess what: to my surprise, it returned nothing! It turns out that once a session is marked KILLED, its v$session.paddr no longer points at the original server process, so the join to v$process comes back empty. I then got the count of the "LOCAL=NO" processes, as below:
$ps -ef |grep "LOCAL=NO" | wc -l

659

Now the problem was: how could I find which of those 659 processes belonged to DBSNMP? What I did finally was:

1. Load the PIDs of all those "LOCAL=NO" processes into a table db_admin.pid with one number column called id. Create the table, generate the INSERT statements with awk, then run them (and commit):

create table db_admin.pid (id number);

ps -ef | grep "LOCAL=NO" | awk '{print "insert into db_admin.pid values(" $2 ");"}'

2. Run the following query to generate the KILL commands

select 'kill -9 ' ||  id from db_admin.pid x 
where not exists (
select 1 
from v$session s, v$process p
where s.paddr = p.addr (+)
  and p.spid = x.id
)
order by id
/

This trick worked. To prevent future trouble, we have also set up a profile limit on the number of sessions the DBSNMP user can have, as sketched below. We still don't know what caused the high session count, though.
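A minimal sketch of such a profile limit (the profile name and the value 50 are illustrative; note that SESSIONS_PER_USER is enforced only when the RESOURCE_LIMIT initialization parameter is TRUE):

create profile dbsnmp_prof limit sessions_per_user 50;
alter user dbsnmp profile dbsnmp_prof;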