Thursday, March 22, 2012

Creating Tables on the Fly

I have an Access 2000 MDB with ODBC linked tables to a SQL Server 7 back
end. I currently have a selections table in the front end file which the
users use to make selections of records. The table has two fields -- primary
key (which matches the primary key of the main SQL Server table), and a boolean
field. The table is linked to the main table in a heterogeneous inner join.
I'm looking to move the table to the back end, while still giving each
machine a unique set of selections. Using one large table with machine name
as part of the primary key actually slows things down. So I'm considering
using a series of tables, where each machine has its own table in the back
end for selections. The machine name would be incorporated in the particular
selections table name, and the front end link would be modified on the fly
when the database is opened to point to that machine's back end selections
table.
This would require having about 50-100 individual selections tables in the
back end database. Also, if a machine doesn't have a table when the database
is opened on that machine, then that table would be created on the fly,
populated, and pointed to via the ODBC link.
Anyone see any problems with this approach, specifically creating the table
on the fly and then immediately using it, as well as having that many little
tables running around? Thanks for any input!
Neil
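For concreteness, the on-the-fly approach being described would boil down to
something like the following dynamic SQL on the server side. This is only a
sketch; the table and column names are invented, and HOST_NAME() stands in for
however the front end would supply the machine name:

    -- Rough sketch of the proposal: one selections table per machine,
    -- created on first use with the machine name baked into the table name.
    DECLARE @machine sysname, @table sysname, @sql nvarchar(4000)

    SET @machine = HOST_NAME()                 -- the client workstation's name
    SET @table   = 'tblSelections_' + @machine

    IF OBJECT_ID(@table) IS NULL
    BEGIN
        SET @sql = 'CREATE TABLE ' + QUOTENAME(@table) + ' (
                        RecordID int NOT NULL PRIMARY KEY, -- matches the PK of the main table
                        Selected bit NOT NULL DEFAULT 0)'
        EXEC (@sql)
    END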
> machine a unique set of selections. Using one large table with machine
> name as part of the primary key actually slows things down. So I'm
> considering
> ...
> This would require having about 50-100 individual selections tables in the
> back end database.
You have 50-100 rows in the table and you think that slows it down? Even
without a clustered index or any index at all, I find it hard to believe
that you can perceive any slowness whatsoever based on a scan of rows in the
three figure range. And to sacrifice manageability for that seems absurd,
at least to me.
|||Neil (nospam@.nospam.net) writes:
> I have an Access 2000 MDB with ODBC linked tables to a SQL Server 7 back
> end. I currently have a selections table in the front end file which the
> users use to make selections of records. The table has two fields --
> primary key (which matches primary key of main, SQL Server table), and a
> boolean field. The table is linked to the main table in a heterogeneous
> inner join.
> I'm looking to move the table to the back end, while still giving each
> machine a unique set of selections. Using one large table with machine
> name as part of the primary key actually slows things down. So I'm
> considering using a series of tables, where each machine has its own
> table in the back end for selections. The machine name would be
> incorporated in the particular selections table name, and the front end
> link would be modified on the fly when the database is opened to point
> to that machine's back end selections table.
>...
> Anyone see any problems with this approach, specifically creating the
> table on the fly and then immediately using it, as well as having that
> many little tables running around? Thanks for any input!
Yes, I see problems. Simply don't go there. This is not the way you use a
relational database. Make that machine name part of the PK in the single
table. If having one single table slows things down, investigate why
instead of resorting to kludges.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp
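For what it's worth, a minimal sketch of the single-table design Erland
recommends, with invented names; the machine name simply becomes the leading
column of the composite primary key:

    -- Hypothetical single shared selections table; machine name is part of the PK.
    CREATE TABLE tblSelections (
        MachineName sysname NOT NULL,    -- e.g. the client's HOST_NAME()
        RecordID    int     NOT NULL,    -- matches the PK of the main table
        Selected    bit     NOT NULL DEFAULT 0,
        CONSTRAINT PK_tblSelections PRIMARY KEY (MachineName, RecordID)
    )

With the machine name leading the key, each machine's rows are stored
together, so a query filtered on one machine name should only touch that
machine's slice of the table.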
|||You misread my post. I said 50-100 tables, not 50-100 rows.
"Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:ewqxaaL3FHA.3636@.TK2MSFTNGP09.phx.gbl...
> You have 50-100 rows in the table and you think that slows it down? Even
> without a clustered index or any index at all, I find it hard to believe
> that you can perceive any slowness whatsoever based on a scan of rows in
> the three figure range. And to sacrifice manageability for that seems
> absurd, at least to me.
>
|||OK, point well taken. I don't see any workaround, though, since I'm using
ODBC linked tables. I can't use a pass-through query and pass the table name
as a parameter, because pass-throughs return read-only sets. And a view
wouldn't be able to return just the records for that machine.
Here's an idea. What if there were one table, but a series of views, with
each view returning records for a particular unique ID? When the app is
opened, it's assigned an available ID and its link is set to the view that
returns records for that ID. Still kind of a kludge, but perhaps a little
better.
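On the server side, that idea would look roughly like the following, assuming
a hypothetical shared table keyed by the assigned ID and one simple view per
ID:

    -- One shared table keyed by an assigned selection ID, plus one
    -- single-table (and therefore updatable) view per ID.
    CREATE VIEW vSelections_01
    AS
    SELECT RecordID, Selected
    FROM   tblSelectionsByID
    WHERE  SelectionID = 1

When a view like this is linked through ODBC, Access prompts for the columns
that uniquely identify a row, which is what lets it treat the linked view as
updatable.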
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns96FED3AD11D99Yazorman@.127.0.0.1...
> Neil (nospam@.nospam.net) writes:
> Yes, I see problems. Simply don't go there. This is not the way you use a
> relational database. Make that machine name part of the PK in the single
> table. If having one single table, slows things down, investigate why
> instead of resorting to kludges.
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techinf...2000/books.asp
>
|||You are sooooooo screwed up I have to use this in a book! I am looking
for anything you did right and I cannot find it.
>> The table has two fields -- primary key (which matches the primary key of
the main SQL Server table), and a boolean field [sic]. The table is linked to
the main table in a heterogeneous inner join [sic]. <<
Let's get back to the basics of an RDBMS. Rows are not records; fields
are not columns; tables are not files. These are FUNDAMENTALLY
DIFFERENT CONCEPTS. SQL does not have Boolean data types -- they would
screw up the 3VL that is the foundation of SQL's model.
Then these will not be the same table, will they? Duh!
Machine name? Have you ever read anything on data modeling? Logical
versus Physical? The basics!! The basics!!
Sure, split the LOGICAL design over the PHYSICAL implementation. To
hell with the foundations of RDBMS, Data Modeling, etc. You are doing
a 1950's tape system in SQL.
You have just implemented a design we found to be a nightmare in the
1950's with magnetic tape file names. It is one of the reasons we went
to disk systems, then to navigational databases, and then to RDBMS
systems.
>> Also, if a machine doesn't have a table when the database is opened on
that machine, then that table would be created on the fly, populated, and
pointed to via the ODBC link. <<
Back to basics. A table models a set of one kind of entity or a
relationship. The schema models the reality of the data model in
those terms. Creating them on the fly is the logical equivalent of
having an elephant drop out of the sky.
>> Anyone see any problems with this approach, specifically creating the
table on the fly and then immediately using it, as well as having that many
little tables running around? <<
Dr. Codd, Chris Date, me, anyone in RDBMS?
Please get some help before you hurt people. Everything you are doing
is wrong.
|||I don't think I did. But have fun.
"Neil" <nospam@.nospam.net> wrote in message
news:mgR8f.3506$yX2.132@.newsread2.news.pas.earthlink.net...
> You misread my post. I said 50-100 tables, not 50-100 rows.
> "Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in
> message news:ewqxaaL3FHA.3636@.TK2MSFTNGP09.phx.gbl...
>
|||You wrote: "You have 50-100 rows in the table and you think that slows it
down?"
I never wrote that I have 50-100 rows in the table. I said I was considering
50-100 tables.
"Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:uJ2c5eV3FHA.3868@.TK2MSFTNGP12.phx.gbl...
>I don't think I did. But have fun.
>
>
> "Neil" <nospam@.nospam.net> wrote in message
> news:mgR8f.3506$yX2.132@.newsread2.news.pas.earthlink.net...
>
|||"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns96FF73E7D7719Yazorman@.127.0.0.1...

> You will have to bear with me, since my knowledge of Access is so poor.
> But if I understand this correctly, you have a local table in Access
> with selections that typically has 50000 rows. And since each user
> has his own Access instance, this means that today there are some 25-50
> instances of this table.
Yes, each on its own machine.

> 13 seconds to open a form is indeed a long time, and many users would
> say too long.
> But moving this data to SQL Server may not be a very good idea at all.
> Sending 50,000 rows over the wire is not done in a snap. On a local
> network it may be
> decently fast, but if you have a user that works from home, it will be
> a pain.
Yes, the situation is over a T1 line. The LAN users don't have any
significant delays. But the WAN users are getting long load times.

> So I would suggest that you rather look into loading fewer rows
> into the form initially, and then fetch more depending on what action
> the user takes. I can't believe that the user is looking at all 50,000
> at a time.
Well, they *work* with all 50,000, even if they don't use them. They like
being able to work in datasheet view (a spreadsheet-like representation of
data) and do sorting, filtering, editing, etc. In form view (the traditional
representation of data), I could give them one record at a time; but in
datasheet view they like to have all the records there.

> If this data is only related to the user's selection, it's probably a
> good idea to keep it local anyway. The only point I can see with moving
> it to the server is that it could permit the user to get back his
> selection if he moves to another machine.
Well, the idea was to eliminate the heterogeneous join. And, indeed, I have
seen a performance increase in other areas of the form with the trial
back-end selections table I put in place (sorting on a field, for example,
is much faster with the selections table in the back end). The problem,
though, is that in this one area, the initial opening of the form, it
actually slows things down. And that's a key area, since users need to be
able to open the form quickly when they need the data.
Neil

> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server SP3 at
> http://www.microsoft.com/sql/techinf...2000/books.asp
>
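Erland's suggestion above about loading fewer rows up front would, on the SQL
side, amount to something like this (all names hypothetical), with further
rows fetched as the user filters or pages:

    -- Hypothetical sketch: load only a first chunk of rows up front and
    -- fetch the rest on demand as the user filters, pages, or searches.
    SELECT TOP 500 s.RecordID, s.Selected, m.ItemDescription
    FROM   tblSelections s
    JOIN   tblMain m ON m.RecordID = s.RecordID
    WHERE  s.MachineName = HOST_NAME()
    ORDER  BY s.RecordID

Whether that fits the datasheet-style workflow Neil describes is the open
question.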
|||Thanks for the tips.
Regarding HOST_NAME, yes, that would be better than passing a parameter. I
found, though, that even with the host name selected on the back end, it was
still slow with that large selections table. Even with SQL selecting the
needed 50,000 records to return (from the 1.25 mil record view), it was
still slower than having a 50,000 record table on the front end and doing
the heterogeneous join with the main table on the back end.
Neil
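For reference, the HOST_NAME() idea John mentions below would look roughly
like the following (table and column names are hypothetical); every front end
links to the same view, but each machine sees only its own rows:

    -- Hypothetical sketch of the HOST_NAME() approach: one shared table,
    -- one view, filtered per calling machine at query time.
    CREATE VIEW vMySelections
    AS
    SELECT RecordID, Selected
    FROM   tblSelections
    WHERE  MachineName = HOST_NAME()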
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:43649a11$0$23295$db0fefd9@.news.zen.co.uk...
> Hi
> I probably know less about Access than Erland, but you may want to look at
> just what is happening on the SQL Server when you have everything on the
> server. SQL Profiler may give you some information that we may be missing
> about how each solution works. Also look at the Query Plans for the executed
> SQL and check that your indexes are being used and that you have up-to-date
> statistics.
> You may be able to simplify things with the HOST_NAME function in your
> view.
> John
> "Neil" <nospam@.nospam.net> wrote in message
> news:cZU8f.3368$Rl1.2750@.newsread1.news.pas.earthlink.net...
>
