Tuesday, July 2, 2013


Can a database restore fail after it has begun?

Posted: 02 Jul 2013 06:57 PM PDT

I've noticed that a lot of times if a database restore is going to fail, it will fail before it even starts restoring in SQL Server 2008 Management Studio (displaying an error about space or something similar). I need to set up a schedule to restore a database overnight, but am not sure if I'll need to monitor it overnight or not. The hardware is stable enough that I believe this won't be a point of failure.

A better question might be: What checks are done against a backup prior to the restore process beginning?
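One way to surface most pre-restore failures ahead of the scheduled run is to validate the backup and inspect its contents first. A minimal sketch (the file path is a placeholder):

```sql
-- Verify the backup set is complete and readable, without restoring it
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\MyDb.bak';

-- Inspect what the backup contains (type, dates, LSNs)
RESTORE HEADERONLY FROM DISK = N'X:\Backups\MyDb.bak';

-- List the data/log files and their sizes, to pre-check target disk space
RESTORE FILELISTONLY FROM DISK = N'X:\Backups\MyDb.bak';
```

Note that VERIFYONLY checks that the backup media is readable; it does not guarantee the restore itself will succeed (e.g. it cannot rule out running out of disk mid-restore).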

MySQL Security permissions

Posted: 02 Jul 2013 06:08 PM PDT

Is it possible in MySQL to block a user from executing these commands?

DESC "table"
SHOW TABLES
SHOW CREATE TABLE "table"

Or do I need to write a wrapper around the MySQL transport port?

GRANT ... ON TABLE does not help hide the database structure from the user.
My developers should not be allowed to dump the database structure or list table names.

/etc/my.cnf

[mysqld]
skip-show-database
skip-networking
bind-address=127.0.0.1
#skip-show-tables
#skip-table-stucture-show

Database query to determine privileges given to public [on hold]

Posted: 02 Jul 2013 06:23 PM PDT

I need a command to find all object and system privileges granted to PUBLIC.

I also need a command to find the users and roles that hold the following privileges:

SELECT ANY, CREATE ANY, ALTER ANY, DROP ANY, INSERT ANY, UPDATE ANY, DELETE ANY, EXECUTE ANY, DEBUG ANY, DEQUEUE ANY, ENQUEUE ANY, FORCE ANY, GRANT ANY, LOCK ANY, MANAGE ANY, BACKUP ANY, ANALYZE ANY, COMMENT ANY, AUDIT ANY
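Assuming an Oracle database (the "ANY" privileges suggest it), the dictionary views DBA_SYS_PRIVS and DBA_TAB_PRIVS can answer both parts; a sketch:

```sql
-- Object privileges granted to PUBLIC
SELECT owner, table_name, privilege
FROM   dba_tab_privs
WHERE  grantee = 'PUBLIC';

-- System privileges granted to PUBLIC
SELECT privilege
FROM   dba_sys_privs
WHERE  grantee = 'PUBLIC';

-- Users and roles holding any "... ANY ..." system privilege
SELECT grantee, privilege
FROM   dba_sys_privs
WHERE  privilege LIKE '%ANY%'
ORDER  BY grantee, privilege;
```

DBA_ROLE_PRIVS can then distinguish which grantees are roles versus users.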

Oracle Select Query Over Database Link Suddenly Hangs Endlessly

Posted: 02 Jul 2013 02:29 PM PDT

I have a query that's been in use for years, and today it started hanging endlessly. I took the query apart and found that the piece causing the hang is a simple join on a primary key that should run like lightning. If I remove the problem table, the query runs in less than a second.

Schematically, it looks like this (the problem table is remote2):

SELECT fields
FROM local_table1, local_table2, ... (more tables),
     remote_table1@dblink remote1, remote_table2@dblink remote2
WHERE ...clauses to tie the local tables together with each other and remote_table1...
  AND remote2.primary_key = remote1.primary_key_of_remote2
  AND remote2.afield <> 'avoid this value'

It's acting like it's stuck on some kind of database lock, but neither I nor the DBA can find one. Is there some kind of special queuing mechanism used by remote database links that is locking this up?
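To look for a blocker, the session wait views on both databases are a good starting point; a sketch using standard dynamic views (run on each side):

```sql
-- What are active sessions currently waiting on?
SELECT sid, event, state, seconds_in_wait
FROM   v$session
WHERE  status = 'ACTIVE' AND username IS NOT NULL;

-- Sessions blocked by another session
SELECT sid, blocking_session, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```

Transactions over a database link also hold locks on the remote side; if a distributed transaction was left in doubt, it can block new work there, so DBA_2PC_PENDING on the remote database is worth checking too.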

Restoring a backed up database fails

Posted: 02 Jul 2013 06:04 PM PDT

I have backed up a database (full backup) and was attempting to restore it as a new database (different name). However, when attempting the restore I get the following error:

The transaction log for database 'L-Test' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases.

Not really sure why that's happening. The size of the database is only 37.50MB and the logfile is set to auto grow as well.
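The error message itself points at sys.databases; a quick check (the database name is taken from the message):

```sql
SELECT name, log_reuse_wait_desc, state_desc
FROM   sys.databases
WHERE  name = 'L-Test';
```

One thing worth verifying is where the restored log file is being placed and how much room that drive has; auto-grow settings from the source database don't help if the target volume is full.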

Partitioning / indexing an extremely large table

Posted: 02 Jul 2013 03:15 PM PDT

I'm working on indexing and partitioning a single data warehouse table that weighs in around 500 GB. The table is a heap, has over a hundred TEXT columns, and the TEXT_IN_ROW option is enabled. I didn't design this table, and I don't have any capacity to change it in the immediate future.

I've been tasked with partitioning it. We're tackling this using a copy of the database on a test server. It can push about 2 GB per second to the SSD RAID arrays, so I/O isn't a significant bottleneck, and it's got 16 cores (2 NUMA nodes), and 64 GB of RAM.

My approach is to disable all the nonclustered indexes, create a partition function and partition scheme (about 12 partitions, all on the PRIMARY filegroup - they're using this to enable rolling maintenance and provide more localized inserts for nightly ETL, and not to distribute I/O), then build a clustered index for the table using this partition scheme.

I'm creating the clustered index and partitioning the table as follows:

CREATE CLUSTERED INDEX CX_DailyTable
ON DailyTable (LoadDate, SeqNumber)
WITH (SORT_IN_TEMPDB = ON)
ON monthly_on_primary (LoadDate);

Obviously, it's taking a long time (3 hours so far as of this post), and I certainly don't expect it to be quick. What worries me slightly is that tempdb is now pushing nearly 1 TB and steadily climbing, despite the current table being around half that size. The MS docs I've read suggest the tempdb space usage should be about the size of the final table/clustered index.

http://msdn.microsoft.com/en-us/library/ms188281.aspx

If SORT_IN_TEMPDB is set to ON, there must be sufficient free space in tempdb to store the sort runs, and sufficient free space in the destination filegroup to store the final index structure. The sort runs contain the leaf rows of the index.

Are their estimates incorrect? Is tempdb being used for substantially more than just the sort runs? Or is creating this clustered index somehow doubling the size of the table? (Seems pretty unlikely; it's a rather wide table, and I estimate we're getting an extra 4-8 bytes per row, plus non-leaf pages by adding a clustered index.)
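While the build runs, you can watch where the tempdb space is actually going; a monitoring sketch (internal objects include the sort runs):

```sql
-- Per-task tempdb allocations; the index build should dominate
SELECT session_id,
       internal_objects_alloc_page_count / 128 AS internal_mb,
       user_objects_alloc_page_count     / 128 AS user_mb
FROM   sys.dm_db_task_space_usage
WHERE  database_id = 2   -- tempdb
ORDER  BY internal_objects_alloc_page_count DESC;
```

If internal-object allocations far exceed the table size, the sort runs themselves are the culprit rather than some other tempdb consumer.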

Default trace enabled but not active

Posted: 02 Jul 2013 12:53 PM PDT

When I query the configuration of the default trace, it shows enabled:

exec sp_configure 'default trace enabled';

name                   minimum  maximum  config_value  run_value
default trace enabled  0        1        1             1

But when I query sys.traces for the path, it returns an empty rowset:

select * from sys.traces;  

What could explain the absence of the enabled trace?
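The run value can remain 1 even if the trace itself was stopped (for example via sp_trace_setstatus, or because the trace files could not be written). One way to bounce it, on the assumption that toggling the option restarts the default trace:

```sql
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'default trace enabled', 0; RECONFIGURE;
EXEC sp_configure 'default trace enabled', 1; RECONFIGURE;

-- The trace should now appear
SELECT id, path, status FROM sys.traces WHERE is_default = 1;
```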

Is it possible to reduce a group to a row based on some criteria?

Posted: 02 Jul 2013 12:05 PM PDT

I'd like to perform a select on a table involving a GROUP BY such that all rows that share the same set of identifiers are grouped together, but I want to reduce the group to one of the grouped rows based on some criteria. For example, the maximum date_added. However, there are other fields of data that could be different among the grouped rows. I want all of those columns to resolve to the row with the max date_added as well.

I realize to get the max date_added I could simply SELECT MAX(date_added), but that is just a column-level aggregate function. Is there any way I can resolve the entire row in a group?

Conceptually, if you imagine each group as a separate table, I want to SELECT * WHERE date_added=(SELECT MAX(date_added)) from that group table.
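One portable way to do this is to join the table back to a per-group aggregate; a sketch with assumed names (table t, grouping column group_id):

```sql
SELECT t.*
FROM   t
JOIN  (SELECT group_id, MAX(date_added) AS max_added
       FROM   t
       GROUP  BY group_id) latest
  ON   t.group_id   = latest.group_id
 AND   t.date_added = latest.max_added;
```

Note that ties on date_added within a group return multiple rows; on databases with window functions, ROW_NUMBER() OVER (PARTITION BY group_id ORDER BY date_added DESC) filtered to 1 avoids that.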

How to run steps in one job sequentially

Posted: 02 Jul 2013 07:29 PM PDT

I have one job in SQL Server Agent that has 3 steps. I want the job to start with step 1; step 2 should start only after step 1 completes, and likewise for step 3.
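Agent job steps already run one at a time; the chaining between them is controlled by each step's on-success and on-failure actions. A sketch with sp_add_jobstep (the job and procedure names are placeholders):

```sql
EXEC msdb.dbo.sp_add_jobstep
     @job_name          = N'NightlyJob',
     @step_name         = N'Step 1',
     @subsystem         = N'TSQL',
     @command           = N'EXEC dbo.DoStepOne;',
     @on_success_action = 3,  -- 3 = go to the next step
     @on_fail_action    = 2;  -- 2 = quit the job reporting failure
```

The same can be set in Management Studio on each step's Advanced page ("On success action: Go to the next step").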

Returning a True or False from an IIF statement

Posted: 02 Jul 2013 09:28 AM PDT

I have a query with the following IIF statement

OnlineEnrolment: IIf([qry_ebs_final_CAN-ENROL-ONLINE].[SP_CAN_ENROL]='Y',True,False)

It gives the correct results, but I'm struggling with the format: it responds with 0 or -1, while I want it to respond with TRUE or FALSE.

How can I change this other than making TRUE and FALSE strings? I still want them as boolean values.

Thanks

Kieran

Zero conversion in sql

Posted: 02 Jul 2013 07:31 PM PDT

Number-to-character conversion in Oracle follows the pattern to_char(number, 'pattern') and provides the format element 'S' to add a sign to the result. For example, SELECT TO_CHAR(10, 'S999') FROM dual; returns +10, but when I check SELECT TO_CHAR(0, 'S999') FROM dual; it returns +0. Can anyone explain why? :-)
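Zero is treated as non-negative, so the 'S' element renders a leading plus. If a sign is wanted only for negative values, the trailing 'MI' element behaves that way; a sketch:

```sql
SELECT TO_CHAR(0,  'S999')  FROM dual;  -- '+0'
SELECT TO_CHAR(0,  '999MI') FROM dual;  -- '0' followed by a space, no sign
SELECT TO_CHAR(-5, '999MI') FROM dual;  -- '5-' (trailing minus)
```

A CASE expression around TO_CHAR is another option when the exact rendering of zero matters.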

boolean purpose column char(0) or tinyint(1)

Posted: 02 Jul 2013 07:53 AM PDT

For a Boolean-purpose column, which is better: a nullable CHAR(0) or a TINYINT(1)?

I know BOOL is an alias for TINYINT(1), but the book "High Performance MySQL", published by O'Reilly, says:

"If you want to store a true/false value in a single bit of storage space, another option is to create a nullable CHAR(0) column. This column is capable of storing either the absence of a value (NULL) or a zero-length value (the empty string). "

Which one is better for size, performance, indexing, etc.?

UPDATE: I found this link useful for this question: Efficient Boolean value storage for Innodb Tables

Dealing with identity ranges for transactional replication

Posted: 02 Jul 2013 09:40 AM PDT

I've noticed that when you set up a transactional replication, SQL Server will set identity range management to manual. What this means is that in my subscription database, when I try to insert a new record into a table whose PK is an identity column, it will give me an error and say that it tried to insert a PK of "1", "2", "3", etc. This is because the current identity value for all identity columns on the subscriber gets reset to 1 instead of staying at what it was on the publisher.

I understand why SQL Server does this - you're supposed to leave the subscriber table as read-only. However, my scenario is a little unorthodox - I update my subscriber from time to time through replication, make an immediate backup of that DB, then I want to do some updates to the subscriber that WON'T be pushed back to the publisher, then when I go to update the subscriber again, I restore its database from the earlier backup and pull the latest updates. Because I want to do updates to the subscriber in between these updates ('temporary deltas' if you will), I need the identity column to work and not to reset to 1 when replicated.

I tried turning on automatic identity range management when setting up my publication, but that just gives me the following error when I try to add a table to the publication:

Msg 21231, Level 16, State 1, Procedure sp_MSrepl_addarticle, Line 2243
Automatic identity range support is useful only for publications that allow updating subscribers.

Is there any way I can get round this problem? I do kind of want to present this replication to SQL Server as if it were read-only at the subscriber end because I don't plan on making updates that will be pushed back to the publisher, but I do want to make temporary updates that will be erased before the next replication.

I have also considered that snapshot replication might be a more appropriate method than transactional replication for my usage pattern, but the trouble is that snapshot replication requires sending the whole darn DB every single update; because I'm planning on taking an immediate backup of the DB after the latest replication, I shouldn't need to do that whole transfer every time; just the changes since last time.
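One workaround, assuming manual identity range management stays on, is to reseed the subscriber's identity columns after each restore, before applying the local "temporary delta" updates; a sketch with a placeholder table:

```sql
-- On the subscriber: set the next identity value past the current maximum
DECLARE @max int;
SELECT @max = MAX(IdCol) FROM dbo.SomeTable;
DBCC CHECKIDENT (N'dbo.SomeTable', RESEED, @max);
```

Run per replicated table (or generated dynamically from metadata); since the subscriber is restored from backup before each sync, the reseed does not interfere with replication itself.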

What is the difference between TCP port and TCP dynamic port?

Posted: 02 Jul 2013 07:54 PM PDT

Can anyone tell me the difference?

I tried setting just one of them and leaving the other blank, and either way it works. What is the difference, and is there anything I missed?

Get old database off external hard drive

Posted: 02 Jul 2013 09:47 AM PDT

My hard drive crashed yesterday; luckily I had a backup. I didn't restore from the backup, though, because I needed a fresh system anyway. However, there was a development database I would like back.

Is there a way to grab an old database file from my back up and somehow get that database onto my new localhost?

Auto increment value needs to be changed inside trigger

Posted: 02 Jul 2013 10:32 AM PDT

I have a slight problem with a table that I have converted which used to be MyISAM, I have now made it a InnoDB table.

Original structure:

id varchar(15),
seqno int(11) auto_increment

Joint PK index on the above 2 fields

New structure:

auto_id int(11) auto_increment,
id varchar(15),
seqno int(11)

The original structure had a joint PK on id and seqno, with seqno being auto-incremented for each unique id value. It would then return the seqno as the last_insert_id in my web application.

With the new InnoDB structure, because InnoDB doesn't support MyISAM-style auto-increment on a secondary column of a composite key, I've had to add an auto_id column, which now carries the auto-increment.

I then implemented a trigger to generate a seqno unique per id value. My question is: how do I make the insert return this seqno in place of the auto_id value? The trigger fires when I insert into my_table, so that it can calculate the next seqno value to insert with.

DROP TRIGGER IF EXISTS `innodb_seqno_trigger`;
//
CREATE TRIGGER `innodb_seqno_trigger` BEFORE INSERT ON `my_table`
FOR EACH ROW
BEGIN
  SET @newseqno = 0;
  SELECT MAX(seqno) INTO @newseqno FROM my_table WHERE id = NEW.id;
  SET NEW.seqno = (@newseqno + 1);
  /* HERE I NEED HELP PLEASE TO GET THE AUTO INCREMENT VALUE TO ACTUALLY RETURN NEW.seqno */
END;
//

Thanks Peter

map user in oracle 10g

Posted: 02 Jul 2013 12:18 PM PDT

here is the scenario:

There are two schemas in Oracle 10g, say db1 and db2. db1 can be accessed with username sys (and its password) and db2 with username test (and its password).

How is it possible to access db1 using username test with its password? In MSSQL there is an option to map a user to a different database. Is there a similar option in Oracle 10g?
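In Oracle, access across schemas is handled by granting privileges on the objects, optionally combined with CURRENT_SCHEMA to avoid prefixing names; a sketch assuming db1 and db2 are schema names:

```sql
-- As the owner (or a DBA): let TEST query a table owned by DB1
GRANT SELECT ON db1.some_table TO test;

-- As TEST: resolve unqualified names against DB1's schema
ALTER SESSION SET CURRENT_SCHEMA = db1;
SELECT * FROM some_table;  -- now refers to db1.some_table
```

Note that CURRENT_SCHEMA changes only name resolution, not privileges; TEST still needs explicit grants on each object.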

What are the valid formats of a PostgreSQL schema?

Posted: 02 Jul 2013 09:40 AM PDT

I can't seem to find documentation that describes the valid format of a PostgreSQL schema name. I know that a schema name cannot:

  • start with a number
  • have spaces
  • start with pg_

What else? Where should I look?
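Those restrictions actually apply only to unquoted identifiers; a quoted name can contain almost anything. A sketch of the difference:

```sql
-- Unquoted: a letter or underscore, then letters, digits, _ or $;
-- folded to lower case
CREATE SCHEMA my_schema1;

-- Quoted: leading digits, spaces, and mixed case are all legal
CREATE SCHEMA "1st Schema";
```

The pg_ prefix remains reserved for system schemas in either form. The rules live in the PostgreSQL documentation chapter on lexical structure, under "Identifiers and Key Words".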

What are the default settings for accessing MySQL remotely?

Posted: 02 Jul 2013 10:11 AM PDT

What are the default settings in MySQL for remote access?

In CentOS, the file /etc/my.cnf has neither skip-networking nor bind-address. The MySQL version is 5.5; does that mean the server allows remote access?

MySQL failed to restart after I added skip-networking and bind-address=127.0.0.1 to my my.cnf file, so how can I disable remote access?

my.cnf

[mysqld]
skip-networking
innodb_file_per_table=1
innodb_file_format=barracuda
innodb_strict_mode=1
default-storage-engine=MyISAM
long_query_time=5
log_slow_queries=ON
thread_cache_size=32
key_buffer_size=64M
table_open_cache=1000
max_connections=300
query_cache_type=2
query_cache_limit=32M
query_cache_size=160M
max_heap_table_size=256M
tmp_table_size=256M
wait_timeout=360
interactive_timeout=360
max_allowed_packet=16M
sort_buffer_size=8M
innodb_buffer_pool_size=256M
innodb_log_file_size=64M

What is the best practice when making changes to the application and database?

Posted: 02 Jul 2013 08:39 AM PDT

I need to make changes to an application which uses a database that is replicated across locations. I was wondering what the best practice is when working with databases and making such changes.

I will be backing up the database to the dev environment and making changes to both the application and the database. While testing the application and the database I will likely change some records. In that case, would it make sense to run the T-SQL scripts I applied in dev against the production database?

restrict user host settings to socket connection only

Posted: 02 Jul 2013 12:20 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can this be done on a per-account basis?
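One per-account option, on the assumption that the server is MySQL 5.5.10 or later, is the auth_socket plugin, which only authenticates connections arriving over the Unix socket as the matching OS user; a sketch:

```sql
INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';

-- Switch the root@localhost account to socket-only authentication
UPDATE mysql.user
SET    plugin = 'auth_socket'
WHERE  User = 'root' AND Host = 'localhost';
FLUSH PRIVILEGES;
```

After this, a TCP connection as root@localhost is refused because the socket credentials cannot be supplied over TCP; test on a throwaway account first before locking down root.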

Unable to create a new listener in oracle 11g

Posted: 02 Jul 2013 12:45 PM PDT

In Net Manager I get the error message "A valid logfile is required" when I click on Listeners and choose the '+' button. After that it keeps giving me the same error when I click any other tab or button in Net Manager.

Why doesn't DELETE + REORG free diskspace (DB2)?

Posted: 02 Jul 2013 01:00 PM PDT

In DB2 I have a table containing large binary data. I purged the whole table and ran runstats, reorg, and runstats again, but the amount of disk space used does not change. What could be wrong here?

The table resides in its own tablespace which I created as follows:

CREATE BUFFERPOOL "MY_BP" SIZE 250 AUTOMATIC PAGESIZE 4096;
CREATE LARGE TABLESPACE MY_TBS IN DATABASE PARTITION GROUP IBMDEFAULTGROUP
  PAGESIZE 4096 MANAGED BY AUTOMATIC STORAGE
  EXTENTSIZE 64 PREFETCHSIZE 64
  BUFFERPOOL MY_BP
  OVERHEAD 10.500000 TRANSFERRATE 0.140000
  FILE SYSTEM CACHING;

I deleted/reorged as follows:

DELETE FROM MY_TBL
RUNSTATS ON TABLE MY_TBL WITH DISTRIBUTION AND DETAILED INDEXES ALL
REORG TABLE MY_TBL
RUNSTATS ON TABLE MY_TBL WITH DISTRIBUTION AND DETAILED INDEXES ALL
ALTER TABLESPACE MY_TBS REDUCE

The table MY_TBL took up 2.5GB before all that and after deleting/reorging it uses only 3 MB less.

FWIW: I'm running DB2/NT v9.5.2.
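Two things worth checking here. First, a plain REORG does not reorganize LONG/LOB data unless asked, and large binary columns are usually stored as LOBs; second, space freed inside the tablespace is not automatically returned to the filesystem. A sketch (REDUCE MAX for automatic-storage tablespaces may require DB2 9.7; on 9.5 a plain REDUCE is the available form):

```sql
-- Reorganize LOB/LONG data too, not just the regular rows
REORG TABLE MY_TBL LONGLOBDATA;

-- Then ask the tablespace to release freed extents back to disk
ALTER TABLESPACE MY_TBS REDUCE MAX;
```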

event scheduler not called

Posted: 02 Jul 2013 02:46 PM PDT

I had created an event, which looks like this:

mysql> show create event event1\G
*************************** 1. row ***************************
               Event: event1
            sql_mode:
           time_zone: SYSTEM
        Create Event: CREATE DEFINER=`root`@`localhost` EVENT `event1`
                      ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00'
                      ON COMPLETION NOT PRESERVE ENABLE
                      DO BEGIN update samp set col1 = col1 + 1; END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

This event was not called on the 1st of the month. So I ran show processlist\G to find out whether it is running or not; it showed the following output:

mysql> show processlist\G
*************************** 1. row ***************************
     Id: 136
   User: event_scheduler
   Host: localhost
     db: NULL
Command: Daemon
   Time: 1855
  State: Waiting for next activation
   Info: NULL

So does NULL in the db column mean that no database is assigned to the event?

Please help me to solve it.
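The Daemon row only shows that the scheduler thread is alive; the event's own state and timing live in information_schema. A sketch of where to look:

```sql
SELECT event_schema, event_name, status, last_executed,
       starts, interval_value, interval_field
FROM   information_schema.events
WHERE  event_name = 'event1';

-- Also confirm the scheduler itself is ON
SHOW VARIABLES LIKE 'event_scheduler';
```

With STARTS '2013-02-02', an EVERY 1 MONTH schedule fires on the 2nd of each month, not the 1st, which may itself explain the observation.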

FETCH API_CURSOR causing open transaction in tempDB

Posted: 02 Jul 2013 04:46 PM PDT

A SELECT statement run from Oracle 11gR1 against SQL Server 2005 using Gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I'm not sure why the open transaction is in tempdb when the statement run is a SELECT.

Has anyone seen this issue before?

Thanks in advance sekhar

Synchronising SQL tables in two locations [on hold]

Posted: 02 Jul 2013 09:34 AM PDT

We have a bit of a conundrum at the moment.

ANSI SQL solutions please.

We have two databases located at two different geographic locations and will be using the web for all traffic between the two locations. We have a few tables common between the two databases that need to be sync'd periodically (i.e. at the end of the day). With such syncs, we are worried about many things like security, congested traffic (could be thousands of records), and data integrity.

The end game is to store files in each record (up to 10MB) so the sync needs to be efficient. (Not a priority at the moment.)

There may be records that need to be synced in both directions (e.g. a column updated at site one, then a different column updated at site two; the single record needs both changes).

What software/hardware options are there out there?

Is there another way without using triggers (want to get away from this)?

Thanks, been a while since I dealt with these technologies.

BACKUP failed to complete - with spaces, for no reason

Posted: 02 Jul 2013 11:26 AM PDT

Every day my database backup fails for some databases and does fine for others. I'm calling the backup through a Stored Procedure that runs scheduled in a job. Tonight I'm going to run the procedure in a query window with a WAITFOR in place just to check. In the \Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\ERRORLOG the message is:

2011-12-28 07:49:52.43 Backup      Error: 3041, Severity: 16, State: 1.  2011-12-28 07:49:52.43 Backup      BACKUP failed to complete the command BACKUP DATABASE                                                                                                                                 . Check the backup application log for detailed messages.  

Obs: the log actually contains all these spaces and no further info.
