
Thursday, March 22, 2012

creating text files using bcp

I am using BCP and it works just fine. I get a txt file with tab separated fields. Can I specify the character positions or lengths of each field so that all my fields line up at the same positions?

Help is appreciated.

I created a stored procedure with the below
declare @filename varchar(50),
@bcpcommand varchar(2000)

set @filename = 'c:\report\media.txt'
print @filename
set @bcpcommand = 'bcp "select * from table" queryout "' + @filename + '" -U -P -c'
exec master..xp_cmdshell @bcpcommand|||You could format your select as needed.|||Can you elaborate more... how do I format my SQL to separate the fields?

like I want one field at position 1-10 and the second from position 11-40 and so on...

Thanks|||Originally posted by hp1000
Can you elaborate more... how do I format my SQL to separate the fields?

like I want one field at position 1-10 and the second from position 11-40 and so on...

Thanks
Just create one long field or as many as you need like this:

select field1+replicate(' ',25-datalength(field1)+...
from table|||I am using

set @bcpcommand = 'bcp "select medium_name + replicate(' ',10-datalength(medium_name)) from database..report_tbl" queryout "'+ @filename + '" -U sa -P -c'

I get Line 6: Incorrect syntax near '

How can I escape the single quotes?

Please help

This method should work for what I want to do.

Thanks|||Originally posted by snail
Just create one long field or as many as you need like this:

select field1+replicate(' ',25-datalength(field1)+...
from table

I am using the above but it does not work with NULL values. How do I make it work with NULL values? Help is very much appreciated.

Thanks|||If you attempt to format your fields within BCP...QUERYOUT you may very quickly reach the limitation on the length of the command line (I don't remember what it is, but you can check.)

What you can do instead, is create a stored procedure where you'd format whatever and whichever way you want, and in your BCP specify a call to that procedure along with QUERYOUT.|||Here's how I do it - much easier to read and manage changes.

Much like a structure or type, I create a temporary table in a stored procedure (TABLE vars won't work, as BCP won't be able to see them), using CHAR data types, one field for each output field. I have a user-defined function for zero-padding, and one for right-justifying (space padding). I populate that table with all my data, then do a BCP statement consisting of SELECT field1 + field2 + field3 etc.

This gives me the spacing I want, and that one create table at the beginning of the procedure allows me to easily control the number of spaces each field gets.

I can post a proc if it would be easier to understand.

-b|||My table def looks like this (for an SAP feed):

CREATE TABLE ##sap_table(sales_organization CHAR(4) NOT NULL,
distribution_channel CHAR(2) NOT NULL,
division CHAR(2) NOT NULL,
filler_1 CHAR(2) NOT NULL,
sold_to CHAR(8) NOT NULL,
delivering_plant CHAR(4) NOT NULL,
profit_center CHAR(10) NOT NULL,
material_code CHAR(10) NOT NULL,
pricing_date CHAR(8) NOT NULL,
charge_type CHAR(4) NOT NULL,
order_reason CHAR(3) NOT NULL,
project_number CHAR(24) NOT NULL,
contract_number CHAR(10) NOT NULL,
purchase_order_number CHAR(12) NOT NULL,
internal_sales_reference CHAR(12) NOT NULL,
usage_type CHAR(3) NOT NULL,
usage_detail CHAR(60) NOT NULL,
requestor_name CHAR(40) NOT NULL,
user_number CHAR(10) NOT NULL,
[user_name] CHAR(40) NOT NULL,
comments CHAR(100) NULL,
quantity CHAR(15) NOT NULL,
unit_of_measure CHAR(3) NOT NULL,
currency CHAR(5) NOT NULL,
extended_amount CHAR(15) NOT NULL,
customer_cost_object CHAR(40) NOT NULL,
cco_flag CHAR(3) NOT NULL,
xy_flag CHAR(1) NOT NULL,
filler_2 CHAR(6) NOT NULL )|||If you want fixed width data, why not use a format file?|||Thanks for all your feedback. I am using the coalesce and datalength functions and they did the trick.

I create a view with the fields I want in the format I want and then do the select on the view with my bcp and it writes to the text file just fine.

Thanks a lot,
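
For reference, here is a minimal sketch of the padding approach the thread settles on: COALESCE and REPLICATE inside a view, then bcp QUERYOUT against the view. All object names and widths below are illustrative, not the poster's actual schema, and the expression assumes each value fits inside its fixed width:

CREATE VIEW dbo.media_report_fixed
AS
SELECT COALESCE(medium_name, '')
       + REPLICATE(' ', 10 - DATALENGTH(COALESCE(medium_name, '')))   -- pad to 10 chars
       + COALESCE(medium_desc, '')
       + REPLICATE(' ', 30 - DATALENGTH(COALESCE(medium_desc, '')))   -- pad to 30 chars
       AS fixed_line
FROM dbo.report_tbl

-- Then export the single column, e.g.:
-- bcp "select fixed_line from myDatabase.dbo.media_report_fixed" queryout "c:\report\media.txt" -c -S myServer -T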

Creating temp table in SP - what about indexes ?

I am creating temporary tables in a Stored Procedure, to create paging for a
web frontend.
I do this by creating a temp table with an "identity" field, then copying all
relevant data into the temp table, and then in the end I select out the
actual "page" from the total temp table, i.e. from record ID 100 to 150.
During this process, I was wondering if my temp table should have an index,
for optimal performance ?
If my temp table has, say, 5000 records, and I want to select and return only
records from 3500 to 3550, I select with a "where clause" specifying only
records from 3500 to 3550, using the identity field, which automatically
works as a "record counter" for my total recordset.
But should I create an index on the identity field, before filling the temp
table with records, and then selecting the actual page to return ?
I mean, if I query using a where clause specifying a column without an
index, wouldn't this create table scans ?
-
Regards,
Tony G.|||I would think creating the clustered index before filling the table would
give you optimal performance, as the data is ordered on insert.
Test each scenario and then you'll be satisfied that you have the best method.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Build, Comparison and Synchronization from Source Control = Database change
management for SQL Server
"Tony Godt" wrote:

> I am creating temporary tables in a Stored Procedure, to create paging for a
> web frontend.
> I do this by creating a temp table with an "identity" field, then copying all
> relevant data into the temp table, and then in the end I select out the
> actual "page" from the total temp table, i.e. from record ID 100 to 150.
> During this process, I was wondering if my temp table should have an index,
> for optimal performance ?
> If my temp table has, say, 5000 records, and I want to select and return only
> records from 3500 to 3550, I select with a "where clause" specifying only
> records from 3500 to 3550, using the identity field, which automatically
> works as a "record counter" for my total recordset.
> But should I create an index on the identity field, before filling the temp
> table with records, and then selecting the actual page to return ?
> I mean, if I query using a where clause specifying a column without an
> index, wouldn't this create table scans ?
>
> -
> Regards,
> Tony G.|||> I mean, if I query using a where clause specifying a column without an
> index, wouldn't this create table scans ?
Correct.
I'm a bit through. You say that you create a temp table in the proc
. But how do you re-use
this temp table? I hope that you don't create the temp table, populate it an
d then select from it
each time a user want to display a page?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Tony Godt" <TonyGodt@.discussions.microsoft.com> wrote in message
news:1EFE7BC9-C4A4-4902-84C3-0B79B78A58FA@.microsoft.com...
> I am creating temporary tables in a Stored Procedure, to create paging for a
> web frontend.
> I do this by creating a temp table with an "identity" field, then copying all
> relevant data into the temp table, and then in the end I select out the
> actual "page" from the total temp table, i.e. from record ID 100 to 150.
> During this process, I was wondering if my temp table should have an index,
> for optimal performance ?
> If my temp table has, say, 5000 records, and I want to select and return only
> records from 3500 to 3550, I select with a "where clause" specifying only
> records from 3500 to 3550, using the identity field, which automatically
> works as a "record counter" for my total recordset.
> But should I create an index on the identity field, before filling the temp
> table with records, and then selecting the actual page to return ?
> I mean, if I query using a where clause specifying a column without an
> index, wouldn't this create table scans ?
>
> -
> Regards,
> Tony G.
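
For reference, a minimal sketch of the paging approach discussed above, with the clustered index created before the table is filled. Table and column names are hypothetical, not from the original post:

CREATE TABLE #page (
    row_id int IDENTITY(1,1) NOT NULL,
    order_id int NOT NULL,
    customer varchar(50) NOT NULL
)
-- Create the clustered index first, so rows are ordered as they are inserted.
CREATE CLUSTERED INDEX IX_page_row_id ON #page (row_id)

INSERT INTO #page (order_id, customer)
SELECT order_id, customer
FROM dbo.Orders          -- hypothetical source table
ORDER BY order_id

-- Return one "page" using the identity column as the row counter.
SELECT row_id, order_id, customer
FROM #page
WHERE row_id BETWEEN 3500 AND 3550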

Tuesday, March 20, 2012

creating tables

Wondering if, while I create a table, I can choose certain values a field can have?
Ex.
CREATE TABLE test
(
Subject_code char(20) NOT NULL,
Student_id char(20) NOT NULL,
Result GOOD or BAD or SUPERGOOD (this is just an example)
)

If a field can have 2 values I can use bit (boolean). But is there a way to do this with several values?|||Normally you would use a check constraint, like this:
CREATE TABLE dbo.ExampleTable (
EntryID int NOT NULL IDENTITY(1,1),
Result varchar(32) NOT NULL
CONSTRAINT ExampleTable_Result_CK CHECK (
Result IN ('GOOD', 'BAD', 'SUPERGOOD'))
)

INSERT INTO ExampleTable (Result) VALUES ('GOOD')
INSERT INTO ExampleTable (Result) VALUES ('BAD')
INSERT INTO ExampleTable (Result) VALUES ('UGLY')
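-- The third INSERT fails with a CHECK constraint violation, since 'UGLY' is not in the allowed list.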

SELECT * FROM ExampleTable
Originally posted by buggirl
Wondering if, while I create a table, I can choose certain values a field can have?
Ex.
CREATE TABLE test
(
Subject_code char(20) NOT NULL,
Student_id char(20) NOT NULL,
Result GOOD or BAD or SUPERGOOD (this is just an example)
)

If a field can have 2 values I can use bit (boolean). But is there a way to do this with several values?

Creating SUBTOTALS on a Matrix report

Hello
I have a matrix report and I'm trying to create a subtotal at the end of both the column and the row field, so something like:
Jan Feb April Total
----
Beer £1.00 £2.00 £2.00 £5.00
Wine £5.00 £5.00 £5.00 £5.00
Spirits £5.00 £5.00 £10.00 £20.00
TOTAL £11.00 £12.00 £17.00 £70.00
So I need the two totals on the row and column sections, then I need a final total of both row and column totals in the bottom right hand cell (exactly how a pivot report works in Excel).
I have tried right clicking the matrix report and selecting Subtotal but this just gives me an empty cell when I preview the report. BOL says it's supposed to total up the values but it isn't doing this. Am I missing something? Any help would be very much appreciated!
Thanks in advance
jano|||In the matrix cells you should always use aggregate functions (e.g.
=Sum(Fields!Sales.Value)). The same aggregate function will be applied to
the subtotals and the sums should automatically show up.
Did you look into the sample reports that come with the RS installation?
Some of them contain matrix examples - just compare them with your report.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Jano" <janobermudes@.microsoft.com> wrote in message
news:28e5c01c464ea$9c46d870$a501280a@.phx.gbl...
Hello
I have a matrix report and I'm trying to create a subtotal
at the end of both the colum and the row field, so
something like:
Jan Feb April Total
----
Beer £1.00 £2.00 £2.00 £5.00
Wine £5.00 £5.00 £5.00 £5.00
Spirits £5.00 £5.00 £10.00 £20.00
TOTAL £11.00 £12.00 £17.00 £70.00
So I need the two totals on the row and column sections,
then i need a final total of both row and column totals in
the bottom right hand cell. (exactly how a pivot report
works in excel).
I have tried right clicking the matrix report and
selecting Subtotal but this just gives me an empty cell
when I preview the report. BOL says it's supposed to total
up the values but it isn't doing this. Am I missing
something? Any help would be very much appreciated!
Thanks in advance
jano

Saturday, February 25, 2012

Creating new field while app running

Hi
Is it possible to add a field to a SQL Server 2005 table programmatically
by a vb.net app while the vb.net app has the table being viewed on a vb.net
form?
Thanks
Regards
You can. If you use a dataset, for example, your program retrieves the data
and then closes the connection, so there is no lock when you add the field.
Also, adding a field is a transaction like any other transaction in the
database, so when the table becomes available, the field will be added.
"John" <John@.nospam.infovis.co.uk> wrote in message
news:ulQbPTHoFHA.1088@.TK2MSFTNGP14.phx.gbl...
> Hi
> Is it possible to add a field to a SQL Server 2005 table
> programmatically by a vb.net app while the vb.net app has the table being
> viewed on a vb.net form?
> Thanks
> Regards
>
|||John,
You can, however it is probably not a good idea.
You can do that with SQL "Alter" commands and use the Adonet command
(SQLClient.SQLCommand) ExecuteNonQuery
You have crossposted to more newsgroups, did you know that the real
newsgroup for your question is
microsoft.public.dotnet.framework.adonet
I hope this helps,
Cor
|||or maybe
microsoft.public.sqlserver.programming? (as there are more ways to
perform this than with ADO.Net)
Of course it is always best in a managed situation to let the DBA perform
these tasks, as changing columns might also add the need for performance
analysis of the database and appropriate actions that might affect the
database in a positive way (adding/deleting indexes, constraints etc.).
If your program is also responsible for the managing task (like a user
level deployed MSDE database) it is a good idea in my opinion to do the
following:
perform the wanted tasks on a SQL Server with Enterprise Manager, optimize
the database, then export all your changes as SQL files.
You can now import these files into your deployed SQL instances either with a
shell call to the command line tools or just read them in and execute the
statements on the connection object (of course you need to log in with SA /
administrator rights to do that, or with a pre-defined user that was
granted the appropriate rights on the database to perform these actions).
regards
Michel Posseth
"Cor Ligthert [MVP]" <notmyfirstname@.planet.nl> wrote in message
news:enuiN7JoFHA.2904@.TK2MSFTNGP14.phx.gbl...
> John,
> You can, however it is probably not a good idea.
> You can do that with SQL "Alter" commands and use the Adonet command
> (SQLClient.SQLCommand) ExecuteNonQuery
> You have crossposted to more newsgroups, did you know that the real
> newsgroup for your question is
> microsoft.public.dotnet.framework.adonet
> I hope this helps,
> Cor
>
|||John (John@.nospam.infovis.co.uk) writes:
> Is it possible to add a field to an sql server 20005 table
> programmatically by a vb.net app while the vb.net app has the table
> being viewed on a vb.net form?
Well, depends on what you mean with possible. But since ADO .Net is
disconnected, there are no locks on the table preventing columns to
be added by you or anyone else.
What will happen to your VB app next time it accesses the table
I don't know. Since a table is intended to be a static entity which
changes only on application upgrades, it's not that your dataset is
going to change automatically. There may be methods to add columns,
though.
Anyway, unless this is a temp table you are working with, you are
probably off-track here. Describe you real business problem, and you
may get better approaches to solve that problem.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
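
For reference, the ALTER statement the replies above refer to would look something like the sketch below; the table and column names are hypothetical. From VB.NET it would be sent through SqlCommand.ExecuteNonQuery (or any other client), like any other non-query statement:

-- Add a new nullable column to an existing table (hypothetical names).
ALTER TABLE dbo.Customers
    ADD Notes varchar(200) NULL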

Creating Nested Nodes

I am trying to write a stored procedure that will accept an XPath to an element or an attribute that is held in a SQL XML field, and then either updates or creates it, setting its value to a @value that is also passed in.

The issue is that the end point, or the path to the end point, may not entirely exist.

For instance, if I passed the following to the stored procedure, I would expect the node to be updated with the value.

xpath : /root/doc[@ID=1]/Resource/Media[@ID]

Value to be set : 10

I would assume that in order to set or update the node, it would have to exist first?

Also, is it possible to create one XQuery that will check the path sent and then update or create it?

Thanks

Currently we do not allow variable parameters to be passed to the xml datatype .exist(), .query() and .modify() methods. That means in order for your xpath to be parametric, you will have to construct dynamic queries. You can either do this on the client side or in a stored procedure that then calls sp_executesql. However, this opens you up to security risks (SQL injection), so it is not an ideal solution.
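
For reference, a minimal sketch of the dynamic-SQL approach described above, covering the update case only (the target node must already exist). The table and column names (dbo.Docs, DocXml) are hypothetical, and the injection caveat above applies if @xpath comes from an untrusted caller:

DECLARE @xpath nvarchar(500), @value nvarchar(100), @sql nvarchar(max)
SET @xpath = N'/root/doc[@ID=1]/Resource/Media[1]/@ID'
SET @value = N'10'
-- Build the statement dynamically because .modify() only accepts a string literal.
SET @sql = N'UPDATE dbo.Docs SET DocXml.modify(''replace value of ('
         + @xpath + N')[1] with sql:variable("@value")'')'
EXEC sp_executesql @sql, N'@value nvarchar(100)', @value = @value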

Creating multiple rows from a field with a list

OK. I'm being sent some data with a varchar field called my_elements that contains a 'list' of data items e.g. "4950,1,0%4954,2,0%4955,1,1"
I want to separate this into three records with a my_element field:
"4950,1,0"
"4954,2,0"
"4955,1,1"
How can I do this in SQL? Here's a template for what I want:

create table my_table
(
my_id int,
my_elements varchar(8000)
)
insert into my_table values (1,'4950,1,0%4954,2,0%4955,1,1')

-- Now I need some SQL to produce to create rows like these:
-- 1,'4950,1,0'
-- 1,'4954,2,0'
-- 1,'4955,1,1'
--
-- Or even better, as 4 numeric fields per row:
-- 1,4950,1,0
-- 1,4954,2,0
-- 1,4955,1,1

Also, I'd like to avoid using cursors if I can.

Any help appreciated. Thanks!|||do you have a fixed or variable number of data items?|||OK So this example Sucks But Hey

It Works for your example

SELECT my_id,SUBSTRING(my_elements,0,9) FROM my_table
UNION
SELECT my_id,SUBSTRING(my_elements,10,8) FROM my_table
UNION
SELECT my_id,SUBSTRING(my_elements,19,9) FROM my_table

tehe

GW|||Originally posted by Paul Young
do you have a fixed or variable number of data items?

Yeah, sorry, I forgot to point out, a variable number of data items|||mmmm

wonder if we could work out the entire length of the field, i.e. total number of separate records

Then

Iterate through a loop using local variables to build a dynamic SQL Statement incrementing the substring position as we go and adding the UNIONS then execute that.

It's a thought

Is the physical length of the Data Items Consistent ?|||or:

if object_id('TEMPDB..#my_table') is not null drop table #my_table

create table #my_table (
my_id int
, my_elements varchar(8000))

insert into #my_table values (1,'4950,1,0%4954,2,0%4955,1,1')
insert into #my_table values (2,'4850,1,0%4854,2,0%4855,1,1')
insert into #my_table values (3,'4750,1,0%4754,2,0%4755,1,1')
insert into #my_table values (4,'4650,1,0%4654,2,0%4655,1,1')
insert into #my_table values (5,'4550,1,0%4554,2,0%4555,1,1')

declare @Tbl table(my_id int, my_element1 int, my_element2 int, my_element3 int)
declare @my_id int, @my_elements varchar(8000)
, @RecordSeperator char(1), @ItemSeperator char(1)
, @my_element varchar(12)
, @RecordPosition int, @ItemPosition int, @LastRecordPosition int, @LastItemPosition int
, @Int1 int, @Int2 int, @Int3 int, @Int4 int

select @RecordSeperator = '%'
, @ItemSeperator = ','

select @my_id = min(my_id) from #my_table
while (@my_id is not null) begin
    select @my_elements = my_elements from #my_table where my_id = @my_id

    -- append a trailing record separator so the last record is processed as well
    set @my_elements = @my_elements + @RecordSeperator

    set @LastRecordPosition = 1
    set @RecordPosition = charindex(@RecordSeperator, @my_elements, @LastRecordPosition)
    while (@RecordPosition > 0) begin
        set @my_element = substring(@my_elements, @LastRecordPosition, @RecordPosition - @LastRecordPosition)

        set @LastItemPosition = 1
        set @ItemPosition = charindex(@ItemSeperator, @my_element, @LastItemPosition)
        set @Int1 = cast(substring(@my_element, @LastItemPosition, @ItemPosition - @LastItemPosition) as int)

        set @LastItemPosition = @ItemPosition + 1
        set @ItemPosition = charindex(@ItemSeperator, @my_element, @LastItemPosition)
        set @Int2 = cast(substring(@my_element, @LastItemPosition, @ItemPosition - @LastItemPosition) as int)

        set @LastItemPosition = @ItemPosition + 1
        set @ItemPosition = charindex(@ItemSeperator, @my_element, @LastItemPosition)
        set @Int3 = cast(substring(@my_element, @LastItemPosition, len(@my_element)) as int)

        raiserror('ID: %d ''%s'' %d - %d - %d.',0,1,@my_id,@my_element,@Int1,@Int2,@Int3)

        insert into @Tbl values(@my_id, @Int1, @Int2, @Int3)

        set @LastRecordPosition = @RecordPosition + 1
        set @RecordPosition = charindex(@RecordSeperator, @my_elements, @LastRecordPosition)
    end

    select @my_id = min(my_id) from #my_table where my_id > @my_id
end

select * from @Tbl|||Wow! Looks good! Thank you Paul.
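
For reference, a set-based alternative to the loop above. It is a sketch that assumes a helper table dbo.Numbers(n int) populated with 1..8000, which is not part of the original thread:

SELECT t.my_id,
       SUBSTRING(t.my_elements, n.n,
                 CHARINDEX('%', t.my_elements + '%', n.n) - n.n) AS my_element
FROM my_table t
JOIN dbo.Numbers n
  ON n.n <= DATALENGTH(t.my_elements)
 AND SUBSTRING('%' + t.my_elements, n.n, 1) = '%'   -- n marks the start of an element
ORDER BY t.my_id, n.n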

Sunday, February 19, 2012

creating index on calculated field in table -sql 2000

Hi all,
we have a table with these columns,
col1 (int)
col2(bigint)
col3 (convert(varchar(10),[col1]) + '_' + convert(varchar(20),[col2]))
...
..
colN
Where col3 is a unique key for our transaction system..
Question :
(1) What will be the performance implications if we query on this 'col3'
field (where the underlying fields are already indexed)?
select * from table1 where col3 in (select id from table2 where f1 =
something and f2 = somethingelse)
I tried to see the execution plan but didn't make much sense..I also
tried above query where table1 has 100,000 records.. but didn't see any
impact..
But in production, we have seen this type of query taking 3 times more
to return results.
(2) To eliminate the above problem, I thought of having an index on col3, but
couldn't find any information on the cost of creating an index on a calculated
field.
Does anyone know how expensive it is to create an index on col3? How
does SQL Server manage the indexes on a calculated field? What happens
when a new row is inserted?
any help is appreciated.
thanks.|||You can create an index on a computed column in SQL 2000. If you're
querying on it often, then it's worth the hit to build it.
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinpub.com
.
"jaynika" <jaynika@.gmail.com> wrote in message
news:1142892662.549052.163100@.i39g2000cwa.googlegroups.com...
Hi all,
we have a table with these columns,
col1 (int)
col2(bigint)
col3 (convert(varchar(10),[col1]) + '_' + convert(varchar(20),[col2]))
...
..
colN
Where col3 is a unique key for our transaction system..
Question :
(1) what will be a performance implications if we query on this 'col3'
field ( where underlying fields are already indexed )
select * from table1 where col3 in (select id from table2 where f1 =
something and f2 = somethingelse)
I tried to see the execution plan but didn't make much sense..I also
tried above query where table1 has 100,000 records.. but didn't see any
impact..
Buy in production, we have seen this type of query taking 3 times more
to return results.
(2) To eliminate above problem, i thought of having index on col3, but
couldn't find any information on 'cost of creating index on caluculated
field'
does any one know how expensive it is to create an index on col3 ' How
does sql server manage the indexes on calculated field ? What happens
when a new row is inserted ?
any help is appreciated.
thanks.|||Thanks TOM,
but what happens when a new row is inserted in the table? Does SQL
Server rebuild the index for the computed column?
thx again!!!|||No, it wouldn't rebuild the entire index.
The only difference between calc'd fields and regular fields is the CPU
cycles to calculate the expression.
If it were me, I would be VERY tempted to have a trigger that calculated
and evaluated col3 on insert and update based upon col1 and col2,
then just store the data in col3 as your finalized form.
Then, just have an index on col3, without the calculations. I base
that upon the precept that you are going to be using col3 for a bunch
of other things, and you might as well just have the data the way you
are going to need it.
regards,
doug
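
For reference, a minimal sketch of indexing the computed column discussed in this thread (the names follow the post, but this is not the poster's exact DDL). The expression is deterministic and precise, so SQL 2000 allows an index on it, provided the required SET options (ARITHABORT, QUOTED_IDENTIFIER, etc.) are on:

CREATE TABLE dbo.table1 (
    col1 int NOT NULL,
    col2 bigint NOT NULL,
    col3 AS (convert(varchar(10), col1) + '_' + convert(varchar(20), col2))
)

CREATE UNIQUE INDEX IX_table1_col3 ON dbo.table1 (col3)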

Creating Identity Field

Below is a simple CREATE TABLE statement in my DTS job that drops myTable,
recreates it and then transfers data from myTable in database 1 to the
myTable in destination database. Given the fact that myTable contains data
and that the causeID field may not begin with a 1 value, what parameters
should I use so SQL will not change or start with causeID = 1?
I just want my data to import and retain whatever identity values it has. Do
I have to specify the (1,1) after IDENTITY?
CODE:
CREATE TABLE [myDatabase].[dbo].[myTable] (
[causeID] int IDENTITY(1,1) NOT NULL,
[causeCode] varchar (3) NULL,
[causeName] varchar (50) NULL
)|||Scott wrote:

> I just want my data to import and retain whatever identity values it
> has. Do I have to specify the (1,1) after IDENTITY?
> CODE:
>
> CREATE TABLE [myDatabase].[dbo].[myTable] (
> [causeID] int IDENTITY(1,1) NOT NULL,
> [causeCode] varchar (3) NULL,
> [causeName] varchar (50) NULL
> )
You can use
Set IDENTITY_INSERT [myTable] ON
then insert the data and afterwards
Set IDENTITY_INSERT [myTable] OFF
New data which will be inserted later will get the correct identities.
HTH,
Stijn Verre^t.|||Why are you using IDENTITY at all? Why is "cause_code" not the proper,
relational key?|||causeID is the primary key, causeCode is a text field code that a user
enters.
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1133140303.738683.141210@.g43g2000cwa.googlegroups.com...
> Why are you using IDENTITY at all? Why is "cause_code" not the proper,
> relational key?
>|||Let me help out with Celko's angle here.
If I type causeCode = "foobar" and so does Celko, why do we have different
causeID values?
Now, let me say that I am not against using Identity (or a similar surrogate
key approach), but I do believe that causeCode should at least be unique /
non-repeating.
A
"Scott" <sbailey@.mileslumber.com> wrote in message
news:ex2v1t88FHA.3048@.TK2MSFTNGP10.phx.gbl...
> causeID is the primary key, causeCode is a text field code that a user
> enters.
> "--CELKO--" <jcelko212@.earthlink.net> wrote in message
> news:1133140303.738683.141210@.g43g2000cwa.googlegroups.com...
>|||Probably something like this...
CREATE TABLE [myDatabase].[dbo].[myTable] (
[causeID] int IDENTITY(1,1) NOT NULL constraint SK_myTable unique
clustered,
[causeCode] varchar (3) NOT NULL constraint PK_myTable primary key
nonclustered,
[causeName] varchar (50) NULL
)
This then satisfies a natural key and surrogate key, you would then use
causeID as the foreign key in other tables...
CREATE TABLE myAudit (
causeID int not null references myTable( causeID )
...
)
rather than...
CREATE TABLE myAudit (
causeCode varchar(3) not null references myTable( causeCode )
)
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:eudsZJ98FHA.1184@.TK2MSFTNGP12.phx.gbl...
> Let me help out with Celko's angle here.
> If I type causeCode = "foobar" and so does Celko, why do we have different
> causeID values?
> Now, let me say that I am not against using Identity (or a similar
> surrogate key approach), but I do believe that causeCode should at least
> be unique / non-repeating.
> A
>
> "Scott" <sbailey@.mileslumber.com> wrote in message
> news:ex2v1t88FHA.3048@.TK2MSFTNGP10.phx.gbl...
>|||I think Scott simply has terminology mixed up which isn't a crime, some
people confuse a 'relational key' with the standard Natural Key.
CREATE TABLE [myDatabase].[dbo].[myTable] (
[causeID] int IDENTITY(1,1) NOT NULL constraint SK_myTable unique
clustered,
[causeCode] varchar (3) NULL constraint PK_myTable primary key
nonclustered,
[causeName] varchar (50) NULL
)
This then satisfies a natural key and surrogate key, you would then use
causeID as the foreign key in other tables...
CREATE TABLE myAudit (
causeID int not null references myTable( causeID )
...
)
rather than...
CREATE TABLE myAudit (
causeCode varchar(3) not null references myTable( causeCode )
)
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"--CELKO--" <jcelko212@.earthlink.net> wrote in message
news:1133140303.738683.141210@.g43g2000cwa.googlegroups.com...
> Why are you using IDENTITY at all? Why is "cause_code" not the proper,
> relational key?
>
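
For reference, a minimal sketch of the IDENTITY_INSERT approach mentioned earlier in the thread for keeping the existing causeID values; the source database name is hypothetical:

SET IDENTITY_INSERT [myDatabase].[dbo].[myTable] ON

INSERT INTO [myDatabase].[dbo].[myTable] (causeID, causeCode, causeName)
SELECT causeID, causeCode, causeName
FROM [sourceDatabase].[dbo].[myTable]     -- the "database 1" copy of myTable

SET IDENTITY_INSERT [myDatabase].[dbo].[myTable] OFF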

Tuesday, February 14, 2012

creating data for a histogram.

I have a table, TableA with amongst other fields, a field for Qty.
Qty can range from 0 to 100.
How do I count the number of rows with a qty between 1 and 10, 11 and
20, 21 and 30, and so on using one SQL statement?

Regards,
Ciarn|||(chudson007@.hotmail.com) writes:
> I have a table, TableA with amongst other fields, a field for Qty.
> Qty can range from 0 to 100.
> How do I count the number of rows with a qty between 1 and 10, 11 and
> 20, 21 and 30, and so on using one SQL statement?

SELECT qty10, COUNT(*)
FROM (SELECT qty10 = ((qty - 1) / 10) * 10 + 1
FROM tbl) AS ds
GROUP BY qty10
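
(For reference, with an integer qty this derived expression maps each value to the lower bound of its ten-wide bucket:)

-- (1-1)/10*10+1 = 1   and (10-1)/10*10+1 = 1    -> bucket 1-10
-- (11-1)/10*10+1 = 11 and (20-1)/10*10+1 = 11   -> bucket 11-20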

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||That seems to just count the number of times each qty appears, just
like

SELECT qty AS Expr1, COUNT(*) AS Expr2
FROM [Temp]
GROUP BY qty

How should I change it so that it counts the number of qtys between
each range of 10?

Regards,
Ciarn|||Hi Erland Sommarskog ,

You always give helpful and informative answers.
I changed the query a bit to show the LowRange as well as the HiRange:
SELECT LowRange, HiRange, COUNT(*)
FROM (SELECT LowRange = ((qty - 1) / 10) * 10 + 1
, HiRange = ((qty - 1) / 10) * 10 + 10
FROM sales) AS ds
GROUP BY LowRange, HiRange
but I am facing a problem. Can you guide me on this?

This query {select q=qty+10 from sales order by q} works but
{select q=qty+10 from sales group by q} does not work. SQL Server 2000
is not recognising aliased columns in the second case.
---
With regards
Jatinder Singh (System Analyst )|||(chudson007@.hotmail.com) writes:
> That seems to just count the number of times each qty appears, just
> like
> SELECT qty AS Expr1, COUNT(*) AS Expr2
> FROM [Temp]
> GROUP BY qty
>
> How should I change it so that it counts the the number of qtys between
> each range of 10?

The query I posted was:

SELECT qty10, COUNT(*)
FROM (SELECT qty10 = ((qty - 1) / 10) * 10 + 1
FROM tbl) AS ds
GROUP BY qty10

I would expect it to give the desired result, assuming that qty is an integer.
If qty is float or decimal, it will indeed just be a roundabout way to
count single qtys.

I will have to admit that I did not test my query, but there is a standard
recommendation that postings asking for help with queries should include:

o CREATE TABLE statement for your table(s).
o INSERT statement with sample data.
o The desired output given the sample data.

This makes it very easy for me or anyone else who answers to cut and paste
into Query Analyzer and test whatever we post.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> You Always give helpfull and informative answers.
> I changed the query a bit to show the LowRange as well HiRange
> SELECT LowRange,HiRange,COUNT(*)
> FROM (SELECT lowRange = ((qty - 1) / 10) * 10 + 1
> ,HiRange=((qty - 1) / 10) * 10 + 10
> FROM sales) AS ds
> GROUP BY lowRange ,HiRange
> but I am facing a problem can You guide me on this
> This query {select q=qty+10 from sales order by q} works but
> {select q=qty+10 from sales group by q} does not work .SQL Server2000
> is not recognising Aliased Columns in second case .

Correct. I believe that Access does this, but that's not in alignment with
the SQL standards.

Instead, the technique to use is a derived table, as above.
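
(For reference, a minimal sketch of that derived-table workaround, applied to the aliased-column example above:)

SELECT q, COUNT(*)
FROM (SELECT q = qty + 10 FROM sales) AS ds
GROUP BY q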

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Thanks Erland,
I am facing another problem, of displaying summarised data along with
the detail data
i.e

Item Qty

Item1 10
Item1 10
Item1 20
40 ( Sum for Item1)
and so on .........

I wish to have a single query which runs on all RDBMS . Is it possible
?

USE pubs
SELECT type, price, advance
FROM titles
ORDER BY type
COMPUTE SUM(price), SUM(advance) BY type

This query works, but it would only work on MS SQL Server.

---------------
With regards
Jatinder Singh (System Analyst )|||jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> I am facing another problem of displaying a summarised data along with
> the detail data
> i.e
> Item Qty
> Item1 10
> Item1 10
> Item1 20
> 40 ( Sum for Item1)
> and so on .........
> I wish to have a single query which runs on all RDBMS . Is it possible
> ?
> USE pubs
> SELECT type, price, advance
> FROM titles
> ORDER BY type
> COMPUTE SUM(price), SUM(advance) BY type
> This Query works but it would work on MS SQLServer .

Here is a query which I believe should be fairly portable. (But since
I only work with SQL Server, I can make no warranties):

SELECT type, x = '', price, advance
FROM titles
UNION
SELECT type, 'Total', SUM(price), SUM(advance)
FROM titles
GROUP BY type
ORDER BY type, x

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||
Erland Sommarskog wrote:
> jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> > I am facing another problem of displaying a summarised data along
with
> > the detail data
> > i.e
> > Item Qty
> > Item1 10
> > Item1 10
> > Item1 20
> > 40 ( Sum for Item1)
> > and so on .........
> > I wish to have a single query which runs on all RDBMS . Is it
possible
> > ?
> > USE pubs
> > SELECT type, price, advance
> > FROM titles
> > ORDER BY type
> > COMPUTE SUM(price), SUM(advance) BY type
> > This Query works but it would work on MS SQLServer .
> Here is a query which I believe should be fairly portable. (But since
> I only work with SQL Server, I can make no warranties):
> SELECT type, x = '', price, advance
> FROM titles
> UNION
> SELECT type, 'Total', SUM(price), SUM(advance)
> FROM titles
> GROUP BY type
> ORDER BY type, x
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techin.../2000/books.asp

Hi Erland,
Thanks, I think it will work on any RDBMS. Your analysis ability is
really something. I cannot describe it in words.

Thanks Again
With warm regards
Jatinder Singh (System Analyst)|||Hi Erland,

Can we similarly replace CUBE operator in SQL Server by using simple
queries that will run on any RDBMS?

With warm regards
Jatinder Singh (System Analyst)|||jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> Can we similarly replace CUBE operator in SQL Server by using simple
> queries that will run on any RDBMS?

I don't use CUBE very often, so I may miss some fine detail. But the
two queries below return the same result:

SELECT type, pub_id, SUM(price), SUM(advance)
FROM titles
GROUP BY type, pub_id WITH CUBE
ORDER BY type, pub_id

SELECT type, pub_id, SUM(price), SUM(advance)
FROM titles
GROUP BY type, pub_id
UNION
SELECT type, NULL, SUM(price), SUM(advance)
FROM titles
GROUP BY type
UNION
SELECT NULL, pub_id, SUM(price), SUM(advance)
FROM titles
GROUP BY pub_id
UNION
SELECT NULL, NULL, SUM(price), SUM(advance)
FROM titles
ORDER BY type, pub_id

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hi Erland
I am bit confused by the output produced by the Query Analyzer and
finding it bit difficult to decide which one of the following query is
faster.

In query one I am using Correlated subquery Approach and it consumes
78% of batch time when run with 2nd query but time of mere 20
micrseconds

In query two I am using functions (these functions berely takes
PolicyNumber and Endrosment No to give output and does the same query )
Approach and it consumes 22% of batch time when run with 1st query but
time of 400 micrseconds

Query 1:-
select RowId,PolicyNumber,EndoNumber,status,SF,case when SubQ is Null
then 'No' else 'Yes' end as Lock,RCount from
(
SELECT RowId,PolicyNumber,EndoNumber,status,SF,
(select status from InProcessData052005MstM WHERE status in ('pen') and
IP.PolicyNumber=PolicyNumber and IP.EndoNumber=EndoNumber) as subQ,
(select count(*) from InProcessData052005MstM WHERE status in
('pen','cur') and IP.PolicyNumber=PolicyNumber and
IP.EndoNumber=EndoNumber) as RCount
--,case when SubQ is Null then 'No' else 'Yes' end as Lock
FROM InProcessData052005MstM IP
WHERE status in ('cur','pen')
) X
ORDER BY PolicyNumber

select getdate()

SELECT
RowId,PolicyNumber,EndoNumber,status,SF,dbo.fnTryGetPolicyCount(PolicyNumber,EndoNumber)
as RCount,
dbo.fnTryGetPolicyLock(PolicyNumber,EndoNumber) as Lock
--(select status from InProcessData052005MstM WHERE status in ('pen')
--and IP.PolicyNumber=PolicyNumber and IP.EndoNumber=EndoNumber) as subQ,
--(select count(*) from InProcessData052005MstM WHERE status in
--('pen','cur') and IP.PolicyNumber=PolicyNumber and
--IP.EndoNumber=EndoNumber) as RCount
--,case when SubQ is Null then 'No' else 'Yes' end as Lock
FROM InProcessData052005MstM
WHERE status in ('cur','pen')
order by PolicyNumber
select getdate()

create function fnTryGetPolicyCount(@p varchar(16),@e varchar(3))
returns int
as
begin
return (select count(*) from InProcessData052005MstM WHERE status in
('pen','cur') and PolicyNumber=@p and EndoNumber=@e)
end

create function fnTryGetPolicyLock(@p varchar(16),@e varchar(3))
returns varchar(3)
as
begin
declare @Lock varchar(3)
select @Lock=status from InProcessData052005MstM WHERE status in
('pen') and PolicyNumber=@p and EndoNumber=@e
if @Lock is null
set @Lock='No'
else
set @Lock='Yes'

return (@Lock)
--(select status from InProcessData052005MstM WHERE status in ('pen')
--and PolicyNumber=@p and EndoNumber=@e)
end

------------------
Jatinder|||jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> I am bit confused by the output produced by the Query Analyzer and
> finding it bit difficult to decide which one of the following query is
> faster.
> In query one I am using Correlated subquery Approach and it consumes
> 78% of batch time when run with 2nd query but time of mere 20
> micrseconds
>
> In query two I am using functions (these functions berely takes
> PolicyNumber and Endrosment No to give output and does the same query )
> Approach and it consumes 22% of batch time when run with 1st query but
> time of 400 micrseconds

The difference in estimate may be because the function is not considered.

Anyway, the one way to benchmark queries is this:

DECLARE @d datetime, @tookms int
SELECT @d = getdate()
-- run query here
SELECT @tookms = datediff(ms, @d, getdate())
PRINT 'This query took ' + ltrim(str(@tookms)) + ' ms to run.'

You need to consider the effect of the cache. If the two queries operates
on the same data, the easiest may be to run the queries several times
and discard the first result. You can also run DBCC DROPCLEANBUFFERS to
clean the cache, but that affects the entire server.

Also, beware that datetime has a resolution of 3.33 ms. For the
measurement method above, I have never seen any value between 0 and
13 ms. I consider values below 50 ms to be too inaccurate to be
taken as a significant. 400 ms is certainly significant.

Note: above you talk "microseconds". I assume this is a typo for
"milliseconds".

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hi Erland,

Thanks for your answer and time.
Sorry for the typo error.
I can be wrong in my assumption, but isn't it that the two queries are
working in a similar fashion? Both are taking a value (or two), passing it
to an inner correlated query (function) and getting the result.

Waiting for your reply.

Jatinder|||jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> Thanks for your answer and time.
> Sorry for the typo error.
> I can be wrong in my assumption buut isn't it that the two queries are
> working in simliar fashion. Both are taking a value(or two) passing it
> to inner corelated query (funtion) and getting the result.

Just because two queries logically are the same, that does not mean that
performance is. There is quite some overhead with calls to scalar user-
defined functions. Also, when you stuff a subquery into a scalar function,
all the optimizer sees is a call; it does not see the contents of the
UDF, so it cannot take any shortcuts.

Table-valued functions are different. Particularly inline functions. Table-
valued inline functions are really just macros, and the query text is
pasted into the query, so the optimizer can rearrange as it likes.

As for the estimates you saw in Query Analyzer, they are just estimates, and
I would not pay too much attention on them.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Hi Erland,

Thanks Again for your time .

The explanation is good. So may I take it that the UDF will always be a
little slower because the Query Optimizer can never rearrange it for
optimization, but using a function makes the query more manageable?

Please correct me if my assumption is wrong.

I have yet another question (query ) .
I have two tables
One empmast which store emp current designation
Other promotion table which store the promotions of an employee during
his service.It stores the information of employee designation promotion
date.

Empmast(empid int primary key,desigid int references desigmast
,.........)
PromotionDtls(empid int references Empmast,promotatedTo int references
desigmast, promotedFrom int references Desigmast,DateOfPromotion
smalldatetime)

EmpMast
empid desigid (current designation of employee)
1 3 ........................
2 1 ..................

PromotionDtls
empid promotedTo PromotedFrom effectiveDate
1 2 1 1-jan-2003
1 3 2 2-dec-2003
..........

Now I wish to use the designation Id in a query
such that if the employee data exists in Promotion Table the promotedTo
should be picked according to Effectivedate
otherwise the Empmast designation
e.g. if I ask for the desigId of the employee having empid 1 on date 2-jun-2003, then
it should be desigId 2.
I did this using isnull but I wish to find a better method.

select isnull( ( select top 1 promotedTo from promotionDtls where
empid=1 and effectivedate<'anygivendate' order by effectivedate desc )
, (select desigid from empmast where empid=1) )

It did give the result but looking for better method to solve this.

With regards
Jatinder Singh|||jsfromynr (jatinder.singh@.clovertechnologies.com) writes:
> I have yet another question (query ) .

Sorry for not coming back to you earlier, but I had limited time for
some days to read the posts in the newsgroups, so I deferred the
difficult stuff until later.

A general advice is that it's better to post a new problem to a new
thread. Then other people might be more keen to answer it.

> I have two tables
> One empmast which store emp current designation
> Other promotion table which store the promotions of an employee during
> his service.It stores the information of employee designation promotion
> date.
> ...

I've now looked at the problem again, but I still could not really
understand what you are looking for. Since I don't like guessing, I
answer with the standard suggestion that you include:

o CREATE TABLE statements for your tables.
o INSERT statements with sample data.
o The desired result given the sample.

The first two makes it simple to copy and paste into Query Analyzer,
and the last makes it possible to actually produce a tested query, and
also helps to clarify what you are looking for.

It's not only that I'm lazy - neither do I like guessing.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp