Search Results

Search found 30412 results on 1217 pages for 'spatial database'.

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (an AutoNumber in Access). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: a list of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID referencing the records in their respective tables.

    I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import: the "Get External Data -- Import" feature in Access has an "In an Existing Table" import option, but it's greyed out. I'm also unsure how to build the relationships between applications and servers when importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file; I think I removed everything, but perhaps I didn't and that's causing the problem?

    Some sample data from the Excel spreadsheet:

        SERVER101
        * Adobe Reader 9
        * BMC Remedy User 7.0
        * HostExplorer 2008
        * Microsoft Office 2003
        * Microsoft Office 2007
        * Notepad++

        SERVER102
        * Adobe Reader 9
        * DameWare Mini Remote Control
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Oracle 9.2

        SERVER103
        * AWDView
        * EXTRA! Personal Client 32-bit
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Snagit 9.1
        * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
        * APPLICATION_ID (autonumber)
        * APPLICATION_NAME (varchar)

        SERVER
        * SERVER_ID (autonumber)
        * SERVER_NAME (varchar)

        INSTALLATION
        * INSTALLATION_ID (autonumber)
        * APPLICATION_ID (number)
        * SERVER_ID (number)
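
    For the INSTALLATIONS load, one hedged approach in Access SQL: link or import each worksheet as its own staging table first, then populate the lookup and mapping tables with INSERT INTO ... SELECT. The staging table and column names below are hypothetical, and the pair of statements would be repeated (or looped in VBA) per worksheet:

        -- Hypothetical staging table Staging_SERVER101 with one column, AppName,
        -- produced by linking/importing the SERVER101 worksheet.
        -- 1) Add any applications not yet in the lookup table:
        INSERT INTO APPLICATION (APPLICATION_NAME)
        SELECT DISTINCT s.AppName
        FROM Staging_SERVER101 AS s
        WHERE s.AppName NOT IN (SELECT APPLICATION_NAME FROM APPLICATION);

        -- 2) Map every staged application to this server:
        INSERT INTO INSTALLATION (APPLICATION_ID, SERVER_ID)
        SELECT a.APPLICATION_ID, v.SERVER_ID
        FROM APPLICATION AS a, [SERVER] AS v, Staging_SERVER101 AS s
        WHERE a.APPLICATION_NAME = s.AppName
          AND v.SERVER_NAME = 'SERVER101';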

    Read the article

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list to my page, but this list should not contain just the videos that were uploaded in the previous week -- it should consider all videos. I'm currently recording views in a single counter column, so I have no information on when a video was watched, and I'm searching for a way to record this data.

    The first solution is the most obvious (and the correct one, as far as I know): have a separate table into which you insert a new row every time you want to record a view (storing the ID of the video and a timestamp). I'm worried that I would quickly accumulate huge amounts of data in this table, and queries against it would be extremely slow (we get about 3 million views a month).

    The second solution isn't as flexible but is easier on the database. I would add 7 columns to the "videos" table, one for each day of the week: views_monday, views_tuesday, views_wednesday, ... and increment the value in the correct column based on the current day, resetting that day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns.

    What do you think: should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, I'm using MySQL.
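
    For what it's worth, a minimal MySQL sketch of the first (log-table) option; table and column names are illustrative, and at ~3 million rows a month the table would want periodic purging or pre-aggregated daily rollups:

        -- One row per view; the composite index serves the weekly rollup.
        CREATE TABLE video_views (
            video_id  INT      NOT NULL,
            viewed_at DATETIME NOT NULL,
            KEY idx_time_video (viewed_at, video_id)
        );

        -- "Most watched this week", regardless of upload date:
        SELECT video_id, COUNT(*) AS views_7d
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views_7d DESC
        LIMIT 10;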

    Read the article

  • Integration transport choice (Oracle + SQL Server)

    - by lak-b
    We have several systems with Oracle (A) and SQL Server (B) databases on the backend. I have to consolidate data from those systems into a new SQL Server database. Something like this:

        (A) =>|---------------|
              | some software | => SQL Server
        (B) =>|---------------|

    where "some software" handles:

    * transport (the A and B systems are located on the network)
    * processing business logic (custom .NET code)

    Because of the first point, I need some queue software or something similar (like MSMQ or Service Broker). On the other hand, I could implement a web service instead of a queue:

        (A) =>|---------------|-------------|
              | queue/service | custom code | => SQL Server
        (B) =>|---------------|-------------|

    The question is: which queue/transport framework should I use with Oracle and SQL Server databases?

    * It would be nice if I could post messages to MSMQ from both Oracle and SQL Server stored procedures (can I?)
    * It would be nice if I could call a web service from both Oracle and SQL Server stored procedures (can I?)
    * It would be nice if I could use something similar in both Oracle and SQL Server stored procedures (what exactly?)

    Which software should I prefer given these requirements?
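
    On the SQL Server side at least, Service Broker can be driven directly from a stored procedure; Oracle's rough counterpart is Advanced Queuing (the DBMS_AQ package). A hedged T-SQL sketch of the enqueue step -- the queue, services, contract and message type are assumed to exist already, and all object names here are hypothetical:

        DECLARE @dialog UNIQUEIDENTIFIER;

        BEGIN DIALOG CONVERSATION @dialog
            FROM SERVICE [//Integration/SourceService]
            TO SERVICE '//Integration/TargetService'
            ON CONTRACT [//Integration/Contract]
            WITH ENCRYPTION = OFF;

        -- Enqueue one change notification; a reader procedure RECEIVEs it later.
        SEND ON CONVERSATION @dialog
            MESSAGE TYPE [//Integration/ChangeMessage] (N'<change id="42" />');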

    Read the article

  • Django forms I cannot save picture file

    - by dana
    I have the model:

        class OpenCv(models.Model):
            created_by = models.ForeignKey(User, blank=True)
            first_name = models.CharField(('first name'), max_length=30, blank=True)
            last_name = models.CharField(('last name'), max_length=30, blank=True)
            url = models.URLField(verify_exists=True)
            picture = models.ImageField(help_text=('Upload an image (max %s kilobytes)' % settings.MAX_PHOTO_UPLOAD_SIZE), upload_to='jakido/avatar', blank=True, null=True)
            bio = models.CharField(('bio'), max_length=180, blank=True)
            date_birth = models.DateField(blank=True, null=True)
            domain = models.CharField(('domain'), max_length=30, blank=True, choices=domain_choices)
            specialisation = models.CharField(('specialization'), max_length=30, blank=True)
            degree = models.CharField(('degree'), max_length=30, choices=degree_choices)
            year_last_degree = models.CharField(('year last degree'), max_length=30, blank=True, choices=year_last_degree_choices)
            lyceum = models.CharField(('lyceum'), max_length=30, blank=True)
            faculty = models.ForeignKey(Faculty, blank=True, null=True)
            references = models.CharField(('references'), max_length=30, blank=True)
            workplace = models.ForeignKey(Workplace, blank=True, null=True)

    the form:

        class OpencvForm(ModelForm):
            class Meta:
                model = OpenCv
                fields = ['first_name', 'last_name', 'url', 'picture', 'bio', 'domain', 'specialisation', 'degree', 'year_last_degree', 'lyceum', 'references']

    and the view:

        def save_opencv(request):
            if request.method == 'POST':
                form = OpencvForm(request.POST, request.FILES)
                # if 'picture' in request.FILES:
                file = request.FILES['picture']
                filename = file['filename']
                fd = open('%s/%s' % (MEDIA_ROOT, filename), 'wb')
                fd.write(file['content'])
                fd.close()
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.picture = form.cleaned_data['picture']
                    new_obj.created_by = request.user
                    new_obj.save()
                    return HttpResponseRedirect('.')
            else:
                form = OpencvForm()
            return render_to_response('opencv/opencv_form.html', {'form': form}, context_instance=RequestContext(request))

    but I don't seem to save the picture in my database... something is wrong, and I can't figure out what :(

    Read the article

  • error in arabic script in mysql

    - by fusion
    I inserted data into a MySQL database which includes Arabic script. While the output displays the Arabic correctly, the data in MySQL looks like garbage, something like this:

        '&#1589;&#1614;&#1608;&#1605;&#1615; &#1579;&#1614;&#1604;&#1575;&#1579;&#1614;&#1577;&#1616; &#1571;&#1610;&#1617;&#1575;&#1605;&#1613; &#1605;&#1616;&#1606; &#1603;&#1615;&#1604;&#1617;&#1616; &#1588;&#1614;&#1607;&#1585;&#1613; &#1600; &#1571;&#1585;&#1576;&#1614;&#1593;&#1575;&#1569;&#1615; &#1576;&#1614;&#1610;&#1606;&#1614; &#1582;&#1614;

    Should I be worried about this? If yes, how do I make it appear in proper Arabic script in MySQL? Thanks.
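
    One observation: values like &#1589; are HTML numeric character references, which suggests the text was entity-encoded (e.g. by htmlentities in PHP) before the INSERT; charset settings alone won't decode rows already stored that way. If the goal is to store raw Arabic text going forward, the usual checklist is UTF-8 end to end -- a hedged sketch, with the table name assumed:

        -- Per connection, before inserting/selecting:
        SET NAMES utf8;

        -- Convert existing storage (back up first):
        ALTER TABLE my_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;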

    Read the article

  • Where can I find my iPhone app's Core Data persistent store?

    - by Dr Dork
    I'm diving into iPhone development, so I apologize in advance if this is a ridiculous question, but in a new iPad app project using the Core Data framework, here's the generated code for creating the persistentStoreCoordinator:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }
            NSURL *storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory]
                stringByAppendingPathComponent:@"ApplicationName.sqlite"]];
            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc]
                initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                    configuration:nil URL:storeUrl options:nil error:&error]) {
                /*
                 Replace this implementation with code to handle the error appropriately.
                 abort() causes the application to generate a crash log and terminate. You should
                 not use this function in a shipping application, although it may be useful during
                 development. If it is not possible to recover from the error, display an alert
                 panel that instructs the user to quit the application by pressing the Home button.
                 Typical reasons for an error here include:
                 * The persistent store is not accessible
                 * The schema for the persistent store is incompatible with the current managed object model
                 Check the error message to determine what the actual problem was.
                 */
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }
            return persistentStoreCoordinator;
        }

    My questions are:

    1. The first time I run the app, is the ApplicationName.sqlite database created automatically if it doesn't exist? If not, when is it created -- when data is added to it programmatically?
    2. Once the DB does exist, where can I locate the file? I'd like to open it with a different program so I can manually manipulate the data.

    Thanks so much in advance for your help! I'm going to continue researching these questions right now.

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and currently the table is up to about 475,000+ records. Because of this, querying this table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, so I can then focus on the application that queries the database. The table is as follows:

        CREATE TABLE IF NOT EXISTS `people` (
          `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(40) NOT NULL,
          `ip` int(4) unsigned NOT NULL,
          `guid` varchar(32) NOT NULL,
          `server` int(4) unsigned NOT NULL,
          `date` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `Person` (`name`,`ip`,`guid`),
          KEY `server` (`server`),
          KEY `date` (`date`),
          KEY `PlayerName` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ;

    I'm storing the IPv4 addresses (ip and server) as 4-byte integers, using the MySQL functions INET_ATON()/INET_NTOA() to encode and decode; I heard this way is faster than varchar(15). The guid is an md5sum, 32 hex characters. The date is stored as a Unix timestamp. I have a unique key on name, ip and guid to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient?

    Here is the code that queries this table. You search for a name, ip, or guid, and it grabs the results of the query and cross-references other records that match the name, ip, or guid from the results of the first query, doing this for each field. This is kind of hard to explain, but basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used.

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
        Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" />
        </form>

        <?php if (!empty($_POST['query'])) { ?>
        <table cellspacing="1" id="1up_people" class="tablesorter" width="300">
        <thead>
        <tr>
            <th>ID</th>
            <th>Player Name</th>
            <th>Player IP</th>
            <th>Player GUID</th>
            <th>Server</th>
            <th>Date</th>
        </tr>
        </thead>
        <tbody>
        <?php
        function super_unique($array)
        {
            $result = array_map("unserialize", array_unique(array_map("serialize", $array)));
            foreach ($result as $key => $value) {
                if (is_array($value)) {
                    $result[$key] = super_unique($value);
                }
            }
            return $result;
        }

        if (!empty($_POST['query'])) {
            $query = trim($_POST['query']);
            $count = 0;
            $people = array();
            $link = mysql_connect('localhost', 'mysqluser', 'yea right!');
            if (!$link) {
                die('Could not connect: ' . mysql_error());
            }
            mysql_select_db("1up");
            $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")";
            $result = mysql_query($sql, $link);
            if (!$result) {
                die(mysql_error());
            }

            // Now take the initial results and parse each column into its own array
            while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                $name = htmlspecialchars($row[1]);
                $people[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
            }

            // now for each name, ip, guid in results, find additional records
            $people2 = array();
            foreach ($people AS $person) {
                $ip = $person['ip'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (ip = \"$ip\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people2[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
                }
            }

            $people3 = array();
            foreach ($people AS $person) {
                $guid = $person['guid'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (guid = \"$guid\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people3[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
                }
            }

            $people4 = array();
            foreach ($people AS $person) {
                $name = $person['name'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date FROM 1up_people WHERE (name = \"$name\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $name = htmlspecialchars($row[1]);
                    $people4[] = array('id' => $row[0], 'name' => $name, 'ip' => $row[2], 'guid' => $row[3], 'server' => $row[4], 'date' => $row[5]);
                }
            }

            // Combine people2, people3 and people4 into just people
            $people = array_merge($people, $people2);
            $people = array_merge($people, $people3);
            $people = array_merge($people, $people4);
            $people = super_unique($people);

            foreach ($people AS $person) {
                $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10';
                echo "<tr>\n";
                echo "<td>".$person['id']."</td>";
                echo "<td>".$person['name']."</td>";
                echo "<td>".$person['ip']."</td>";
                echo "<td>".$person['guid']."</td>";
                echo "<td>".$person['server']."</td>";
                echo "<td>".$date."</td>";
                echo "</tr>\n";
                $count++;
            }

            // Find Total Records
            //$result = mysql_query("SELECT id FROM 1up_people", $link);
            //$total = mysql_num_rows($result);

            mysql_close($link);
        }
        ?>
        </tbody>
        </table>
        <p>
        <?php echo $count." Records Found for \"".$_POST['query']."\" out of $total"; ?>
        </p>
        <?php
        }
        $time_stop = microtime(true);
        print("Done (ran for ".round($time_stop-$time_start)." seconds).");
        ?>

    Any help at all is appreciated! Thank you.
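
    As a starting point for collapsing the per-row follow-up loops, the cross-referencing can be pushed into a single round trip with a self-join; a sketch, untested against this schema ('foo' stands for the search term):

        -- One query instead of 1 + 3N. Note: ORed join conditions defeat the
        -- indexes; a UNION of three single-condition self-joins (one on ip,
        -- one on guid, one on name) keeps them usable and is usually faster.
        SELECT DISTINCT p2.id, p2.name, INET_NTOA(p2.ip) AS ip, p2.guid,
               INET_NTOA(p2.server) AS server, p2.date
        FROM 1up_people AS p1
        JOIN 1up_people AS p2
          ON p2.ip = p1.ip OR p2.guid = p1.guid OR p2.name = p1.name
        WHERE p1.name LIKE '%foo%'
           OR INET_NTOA(p1.ip) LIKE '%foo%'
           OR p1.guid LIKE '%foo%';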

    Read the article

  • Echoing users by group name in PHP

    - by BobSapp
    What I want to do is click the name of a group (any group I create other than the poweruser and admin groups) and have it echo all of the users in that group from the database. I have figured out the code so far, but now my problem is how to print it all out when clicking the name of the group. My code so far is:

        <h3>Groups</h3>
        <?php
        include('db.php');
        if (isset($_GET["groupID"])) {
            $sql = "SELECT `group`.*, `user`.* FROM `user` inner join `group` on group.groupID=user.groupID where group.groupID= " . mysql_real_escape_string($_GET["groupID"]);
        } else {
            $sql = "SELECT * FROM `group` WHERE groupName <> 'admin' AND groupName <> 'poweruser'";
        }
        $result = mysql_query($sql, $connection);
        while ($line = mysql_fetch_array($result)) {
            echo "<a href='index.php?page=groups&group=".$line['groupID']."'>".$line['groupName'].'</a><br />';
        }
        mysql_free_result($result);
        mysql_close($connection);
        ?>

    Read the article

  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following:

        threads table:
        * id - int, PK, not NULL, auto-increment
        * name - varchar(255)

        posts table:
        * id - int, PK, not NULL, auto-increment
        * thread_id - FK for threads

    The tables have other fields as well, but they are not relevant for the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as:

        SELECT *
        FROM threads t
        LEFT JOIN posts p ON t.id = p.thread_id
        ORDER BY p.id
        LIMIT 0, 1

    The last post is pretty much the same, except for ORDER BY p.id DESC. Now, I could select multiple threads with their first or last posts by doing:

        SELECT *
        FROM threads t
        LEFT JOIN posts p ON t.id = p.thread_id
        GROUP BY t.id
        ORDER BY p.id

    But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, then what tips could you give me to improve query performance in this particular situation?
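
    If the no-subquery rule can be relaxed to a single derived table, one common pattern grabs both ends at once -- first and last post ids come from MIN/MAX per thread, then two joins back to posts fetch the rows (aliases are mine):

        SELECT t.id, t.name,
               fp.id AS first_post_id,
               lp.id AS last_post_id
        FROM threads t
        JOIN (SELECT thread_id, MIN(id) AS min_id, MAX(id) AS max_id
              FROM posts
              GROUP BY thread_id) x ON x.thread_id = t.id
        JOIN posts fp ON fp.id = x.min_id
        JOIN posts lp ON lp.id = x.max_id;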

    Read the article

  • How do software projects go over budget and under-deliver?

    - by Carlos
    I've come across this story quite a few times here in the UK: the NHS Computer System. Summary: we're spunking £12 billion on some health software with barely anything working.

    I was sitting in the office discussing this with my colleagues, and we had a little think about it. From what I can see, all the NHS needs is a database + middle tier of drugs/hospitals/patients/prescriptions objects, and various GUIs for doctors and nurses to look at. You'd also need to think about security and scalability. And you'd need to sit around a hospital/pharmacy/GP's office for a bit to figure out what they need. But, all told, I'd say I could knock together something with that kind of structure in a couple of days, and maybe throw in a month or two to make it work at scale. If I had a few million quid, I could probably hire some really excellent designers to make a maintainable codebase, and also buy appropriate hardware to run the system on.

    I hate to trivialize something that seems to have caused so much trouble, but to me it looks like just a big distributed CRUD + UI system. So how on earth did this project bloat to £12B without producing much useful software? As I don't think the software sounds so complicated, I can only imagine that something about how it was organised caused this mess. Is outsourcing the problem? Is it that the software designers didn't understand the medical business?

    What are your experiences with projects that went over budget and under-delivered? What are best practices for large projects? Have you ever worked on such a project?

    Read the article

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here, where I have products of different types, each type with its own attributes. I report a short version for convenience:

        product_type
        ============
        product_type_id   INT
        product_type_name VARCHAR

        product
        =======
        product_id      INT
        product_name    VARCHAR
        product_type_id INT -> foreign key to product_type.product_type_id
        ... (attributes common to all products)

        magazine
        ========
        magazine_id INT
        title       VARCHAR
        product_id  INT -> foreign key to product.product_id
        ... (magazine-specific attributes)

        web_site
        ========
        web_site_id INT
        name        VARCHAR
        product_id  INT -> foreign key to product.product_id
        ... (web-site-specific attributes)

    This way I do not need one huge table with a column for each attribute of the different product types (most of which would then be NULL).

    How do I SELECT a product by product.product_id and see all its attributes? Do I have to make one query first to learn what type of product I am dealing with and then, through some logic, make another query to JOIN the right tables? Or is there a way to join everything together? (If, when I retrieve the information about a product_id, there are a lot of NULLs, that would be fine at this point.) Thank you.
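
    One way to skip the two-step logic is to LEFT JOIN every subtype table at once; the non-matching subtype columns simply come back NULL, which the question already accepts. A sketch over the tables above:

        SELECT p.product_id,
               p.product_name,
               pt.product_type_name,
               m.title AS magazine_title,   -- NULL unless the product is a magazine
               w.name  AS web_site_name     -- NULL unless the product is a web site
        FROM product p
        JOIN product_type pt ON pt.product_type_id = p.product_type_id
        LEFT JOIN magazine m ON m.product_id = p.product_id
        LEFT JOIN web_site w ON w.product_id = p.product_id
        WHERE p.product_id = 42;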

    Read the article

  • SQL left join with multiple rows into one row

    - by beardedd
    Basically, I have two tables: Table A contains the actual items that I care to get out, and Table B is used for language translations. Anytime text is used within Table A, instead of storing actual varchar values, IDs are stored that relate back to text stored in Table B. By adding a LanguageID column to Table B, this allows me to have multiple translations for the same row in the database. Example:

        Table A
        Title (int)
        Description (int)
        Other data....

        Table B
        TextID (int) - this is the column whose value is stored in other tables
        LanguageID (int)
        Text (varchar)

    My question is more a call for suggestions on how to best handle this. Ideally I want a query that I can use to select from the table and get the text out, as opposed to the IDs of the text. Currently, when I have two text items in the table, this is what I do:

        SELECT C.ID, C.Title, D.Text AS Description
        FROM (
            SELECT A.ID, A.Description, B.Text AS Title
            FROM TableA A, TranslationsTable B
            WHERE A.Title = B.TextID AND B.LanguageID = 1
        ) C
        LEFT JOIN TranslationsTable D ON C.Description = D.TextID AND D.LanguageID = 1

    This query gives me the row from Table A I am looking for (using WHERE conditions in the inner SELECT) with the actual text, based on the language ID, instead of the text IDs. This works fine when I am only using one or two text items that need translation, but with a third item or more it starts to get really messy -- essentially another LEFT JOIN on top of the example.

    Any suggestions on a better query, or at least a good way to handle 3 or more text items in a single row?
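
    For what it's worth, the usual shape for this is one LEFT JOIN per translated column rather than nesting -- still one join per column, but flat and mechanical to extend. A sketch (1 stands for the desired LanguageID):

        SELECT a.ID,
               t.Text AS Title,
               d.Text AS Description
        FROM TableA a
        LEFT JOIN TranslationsTable t ON t.TextID = a.Title       AND t.LanguageID = 1
        LEFT JOIN TranslationsTable d ON d.TextID = a.Description AND d.LanguageID = 1;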

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries and I have a query that states:

        "select * from SC where c_id = " + c_id

    (built by string concatenation in the host language). The schema of SC looks like this:

        SC (
            c_id       int not null,
            date_start date not null,
            date_stop  date not null,
            r_t_id     int not null,
            nt         int,
            t_p        decimal,
            PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My immediate bid on how the index should be created is a covering index in this order:

        INDEX (c_id, date_start, date_stop, nt, r_t_id, t_p)

    My reasons for this order:

    * The WHERE clause filters on c_id, thus making it the first column.
    * Next, date_start and date_stop, to specify a sort of "range" defined by these parameters.
    * Next, nt, because it will be selected.
    * Next, r_t_id, because it is an ID for a specific type in my r_t table.
    * And last, t_p, because it is just information.

    I don't know if it is at all necessary to order the columns in a specific way when the statement is a SELECT *. I should say that SC is not the biggest table; I can't say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert into SC, and I know that indexes on tables that receive inserts can be cost-ineffective -- is there a golden middle way that keeps the performance of both? Don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database.

    Sincerely, Mestika
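
    One point worth checking before adding anything: the primary key (c_id, r_t_id, date_start, date_stop) already leads with c_id, so DB2 can satisfy WHERE c_id = ? from the PK index alone; a separate index only pays off if it covers every selected column. A hedged sketch of the covering option (index name is mine):

        -- Covers SELECT * for this table, at the cost of extra work on every insert:
        CREATE INDEX idx_sc_cover
            ON SC (c_id, date_start, date_stop, r_t_id, nt, t_p);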

    Read the article

  • Connection string problems on shared hosting with SQL Server 2005 Express

    - by dagogo
    Hi, I have a problem connecting to my DB on shared hosting. My host provider says they deployed SQL Server 2005 Express for their databases, and I prepared my connection string to take advantage of SQL Express. The data source name I used originally was ./SQLExpress, but my host provider asked that I change it to localhost; with the former it didn't connect, and with the change the following error still comes up on access to my default page:

        Server Error in '/' Application.
        Invalid value for key 'attachdbfilename'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.ArgumentException: Invalid value for key 'attachdbfilename'.
        Source Error:
        Line 120: Public Function GetID(ByVal sLgaName As String) As Integer
        Line 121:     Dim q As String = "Select PLID " & "From LGA " & "Where LGAName = " & "'" & sLgaName & "'"
        Line 122:     Dim cn As New SqlConnection(Me.ConnectionString)
        Line 123:     Dim cmd As New SqlCommand(q, cn)
        Line 124:

    I've read up a lot on the web and googled my fingers numb on this. I have a deadline to deliver this project, and having successfully built the app it's frustrating for this to happen. Please help me.

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates.

    My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except the colour scheme and site name are different, plus it's in PHP -- so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different).

    The database is called tvguide. The programmes are:

    * House M.D.
    * Medium
    * Police Stop!
    * American Dad!

    The table names for the above programmes are:

    * housemdonair
    * mediumonair
    * policestopair
    * americandad1

    The table names for the above programmes' episode guides are:

    * housemdepidata
    * mediumepidata
    * policestopepidata
    * americandad1epidata

    All of them have the following columns:

    * id (not an auto-increment, since I wish to dynamically generate a page from this)
    * episodename
    * seriesnumber
    * episodenumber
    * episodesynopsis

    (the four after id do exactly as stated). I have a pagination script that works; it displays 20 records per page, as I want it to. It's called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to get pages filled in like this (OK, so the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!):

    * http://library.digiguide.com/lib/episode/741168
    * http://library.digiguide.com/lib/episode/714829

    with the episode detail and data. My site works, but it's fairly basic, and it won't go online until my bugs are fixed. mod_rewrite is enabled, so my site reads as:

    * http://mytvguide.com/episode/123456
    * http://mytvguide.com/programme/123456
    * http://mytvguide.com/WorldInAction/123456/Documentary/

    I've tried looking on Google, but am not sure how to get this TV guide script to work at its best -- I think .tpl plus PHP/MySQL is the way to go. Any advice on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining it!

    P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website that displays data refreshed every 5 minutes from a text file with a | separator. I currently use Java to do this.

    What I do now: I grab the text file for every request through the website, process it, and then display the data to the end user. This works fine, since Java can go through about 5,000 lines of data fast, and filtering it is still extremely fast.

    However, now management wants the following: they added 3 more text files with the | separator, and want me to also read those files and match the information on certain fields; if there is a match, that information should also be displayed to the end user. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 files in total, matching the information.

    What can I do to make this process super fast? My creative solutions so far:

    * Leave it this way, since Java is fast and end users can wait (probably less than 1 second).
    * Have a background process that dumps the new data into a MySQL database every 5 minutes, since databases are extremely good at matching the same data across multiple tables.

    Thank you!
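
    If the MySQL route is taken, the 5-minute import step is nearly free, since MySQL ingests pipe-delimited files natively; matching then becomes indexed joins across the four tables. A sketch -- the path and table name are assumptions:

        -- Reload one feed; repeat per file, then JOIN the four tables on the match fields.
        LOAD DATA LOCAL INFILE '/data/feed1.txt'
        INTO TABLE feed1
        FIELDS TERMINATED BY '|'
        LINES TERMINATED BY '\n';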

    Read the article

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed with which it can be queried?

    I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable.

    Right now I have a members table with around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns, and whatnot, it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized.

    Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables -- one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
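
    A hedged sketch of the split being considered: a 1:1 stats table keyed by the member id, so the hot, frequently-updated numbers live apart from the wide profile row (all names are illustrative):

        CREATE TABLE member_stats (
            member_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- 1:1 with members
            army_size INT UNSIGNED NOT NULL DEFAULT 0,
            gold      INT UNSIGNED NOT NULL DEFAULT 0,
            turns     INT UNSIGNED NOT NULL DEFAULT 0,
            FOREIGN KEY (member_id) REFERENCES members (id)
        ) ENGINE=InnoDB;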

    Read the article

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys in my SQL Server database. I have always used surrogate keys in the past, and I am challenging myself by leaving my comfort zone to try something different. I have read many discussions but can't come to any kind of conclusion yet.

    The struggle I am having is with updating a record that has a composite PK. For example, the record in question looks like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is ContactID, RoleID, and EffectiveDate; TerminationDT can be null.

    Say the user changes the RoleID in my UI, and I need to update the record. Using a surrogate key I can do:

        UPDATE Table SET RoleID = 1 WHERE SurrogateID = Z

    However, using the composite-key approach, once one of the fields in the composite key changes I have no way to reference the old record to update it, without the UI maintaining a reference to the old values somewhere. I do not bind data sources in my UI; I open a connection, get the data and store it in a bucket, then close the connection.

    What are everyone's opinions? Thanks.
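
    For the update itself, the old record stays addressable through its old key values -- the UI has to carry the three old key fields, just as it carries a surrogate id today. A T-SQL sketch with an assumed table name:

        UPDATE ContactRole
        SET RoleID = @NewRoleID
        WHERE ContactID     = @OldContactID
          AND RoleID        = @OldRoleID
          AND EffectiveDate = @OldEffectiveDate;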

    Read the article

  • Pre-done SQLs to be converted to Rails-style models

    - by Hoornet
    I am a Rails newbie and would really appreciate it if someone converted these SQL queries into complete Rails models. I know it's a lot to ask, but I can't just use find_by_sql for all of them. Or can I? These are the SQLs (they run on MS-SQL):

    1)

        SELECT STANJA_NA_DAN_POSTAVKA.STA_ID, STP_DATE, STP_TIME, STA_OPIS, STA_SIFRA, STA_POND
        FROM STANJA_NA_DAN_POSTAVKA
        INNER JOIN STANJA_NA_DAN ON (STANJA_NA_DAN.STA_ID = STANJA_NA_DAN_POSTAVKA.STA_ID)
        WHERE ((OSE_ID = 10)
          AND (STANJA_NA_DAN_POSTAVKA.STP_DATE = {d '2010-03-30'})
          AND (STANJA_NA_DAN_POSTAVKA.STP_DATE <= {d '2010-03-30'}))

    2)

        SELECT ZIGI_OBDELANI.OSE_ID, ZIGI_OBDELANI.DOG_ID AS DOG_ID, ZIGI_OBDELANI.ZIO_DATUM AS DATUM,
               ZIGI_PRICETEK.ZIG_TIME_D AS ZIG_PRICETEK, ZIGI_KONEC.ZIG_TIME_D AS ZIG_KONEC
        FROM (ZIGI_OBDELANI
        INNER JOIN ZIGI ZIGI_PRICETEK ON ZIGI_OBDELANI.ZIG_ID_PRICETEK = ZIGI_PRICETEK.ZIG_ID)
        INNER JOIN ZIGI ZIGI_KONEC ON ZIGI_OBDELANI.ZIG_ID_KONEC = ZIGI_KONEC.ZIG_ID
        WHERE (ZIGI_OBDELANI.OSE_ID = 10)
          AND (ZIGI_OBDELANI.ZIO_DATUM = {d '2010-03-30'})
          AND (ZIGI_OBDELANI.ZIO_DATUM <= {d '2010-03-30'})
          AND (ZIGI_PRICETEK.ZIG_VELJAVEN < 0)
          AND (ZIGI_KONEC.ZIG_VELJAVEN < 0)
        ORDER BY ZIGI_OBDELANI.OSE_ID, ZIGI_PRICETEK.ZIG_TIME ASC

    3)

        SELECT STA_ID, SUM(STP_TIME) AS SUM_STP_TIME, COUNT(STA_ID)
        FROM STANJA_NA_DAN_POSTAVKA
        WHERE ((STP_DATE = {d '2010-03-30'})
          AND (STP_DATE <= {d '2010-03-30'})
          AND (STA_ID = 3)
          AND (OSE_ID = 10))
        GROUP BY STA_ID

    4)

        SELECT DATUM, TDN_ID, TDN_OPIS, URN_OPIS, MOZNI_PROBLEMI, PRIHOD, ODHOD, OBVEZNOST, ZAKLJUCEVANJE_DATUM
        FROM OBRACUNAJ_DAN
        WHERE ((OSE_ID = 10)
          AND (DATUM = {d '2010-02-28'})
          AND (DATUM <= {d '2010-03-30'}))
        ORDER BY DATUM

    These SQLs compute daily working hours, and I got them as-is. I also got the database with them, which (as you can see from the SQLs) does not follow Rails conventions.

    As a P.S.:

    1) Things like STP_DATE = {d '2010-03-30'} are of course dates (in Slovenian date notation) and will be replaced with variables (date from and date to), so that the user can choose the range.
    2) All of this data will be shown on the same page in one table -- so maybe it should all go in one model? Or many?

    So can someone help me? It's for my work, it's my first project, I am a Rails newbie, and the bosses are getting impatient (they are getting quite loud, actually). Thank you very very much!

    Read the article

  • Which SQL consumes less memory?

    - by prmatta
    Yesterday I asked a question on how to rewrite SQL to do selects and inserts in batches. I needed to do this to try to consume less virtual memory, since I need to move millions of rows here. The goal is to move rows from Table B into Table A. Here are the ways I can think of doing this:

    SQL #1)

        INSERT INTO A (x, y, z)
        SELECT x, y, z
        FROM B b
        WHERE ...

    SQL #2)

        FOREACH
            SELECT x, y, z FROM B b WHERE ...
            INSERT INTO A (x, y, z);
        END FOREACH;

    SQL #3)

        FOREACH
            SELECT FIRST 2000 x, y, z FROM B b WHERE ...
            INSERT INTO A (x, y, z);
        END FOREACH;

    SQL #4)

        FOREACH
            SELECT FIRST 2000 x, y, z FROM B b WHERE ... AND NOT EXISTS IN (SELECT * FROM A)
            INSERT INTO A (x, y, z);
        END FOREACH;

    Are any of the above incorrect? The database is Informix 11.5.
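
    One hedged middle ground between #1 and #3: keep the set-based INSERT ... SELECT but slice it by the source key, so each statement touches a bounded row count and the work can be committed between slices (this assumes B has a numeric key column, here called id):

        -- Slice 1 of N; advance the bounds and repeat, committing between slices.
        INSERT INTO A (x, y, z)
        SELECT x, y, z
        FROM B b
        WHERE ...
          AND b.id >= 1 AND b.id < 2001;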

    Read the article

  • What are provenly scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores have to be queryable fast if they are to be used for customizing web sites, consumer communications, etc. A profile looks roughly like:

        [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.]

    Well. If you have:

    * a large number of consumers
    * large profiles with a huge set of variables (as profiles describing human behaviour are likely to be...)

    ...you are in trouble. If you really have a physical relational database to which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results.

    There is the possibility of holding the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) construct of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this:

        Document
            MetaData
                User
                Device
            Content
                ContentItem[0]
                    SubItem[0]
                    SubItem[1]
                    SubItem[2]
                ContentItem[1]
                    ...
                ContentItem[2]
                    ...

    Right now I'm thinking of creating one big query, doing something like this (just pseudo-code):

        EXEC @DeviceID = CreateDevice ...;
        EXEC @UserID = CreateUser ...;
        EXEC @DocID = CreateDocument @DeviceID, @UserID, ...;
        EXEC @ItemID = CreateItem @DocID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        ...

    But is this the best solution for performance? If not, what would be better? Split it into more queries? Give all the data to one big stored procedure to reduce the size of the query? Any other performance clue?

    I also thought of giving multiple items to one stored procedure, but I don't think it's possible to pass a non-static amount of items to a stored procedure. Since INSERT INTO A VALUES (B,C),(C,D),(E,F) is more performant than 3 single inserts, I thought I could get some performance here.

    Thanks for any hints, Marks
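
    On the non-static-amount point: since SQL Server 2008, table-valued parameters let one procedure receive any number of rows in a single call, keeping the set-based insert inside the procedure. A sketch -- the type, table and column names are mine:

        CREATE TYPE dbo.SubItemList AS TABLE
        (
            ParentItemID INT            NOT NULL,
            Payload      NVARCHAR(200)  NOT NULL
        );
        GO
        CREATE PROCEDURE dbo.CreateSubItems
            @Items dbo.SubItemList READONLY
        AS
            -- One multi-row insert instead of N single-row procedure calls.
            INSERT INTO dbo.SubItem (ItemID, Payload)
            SELECT ParentItemID, Payload
            FROM @Items;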

    Read the article

  • Multiple Foreign keys to a single table and single key pointing to more than one table

    - by user1216775
    I need some suggestions from the database design experts here.

    I have around six foreign keys in a single table (defect), all pointing to the primary key of the user table. It looks like:

        defect (....., assigned_to, created_by, updated_by, closed_by, ...)

    If I want to get information about a defect, I can make six joins. Is there a better way to do it?

    Another one: I have a states table which can store one of a user-defined set of values. I have a defect table and a task table, and I want both of these tables to share the common state table (New, In Progress, etc.). So I created:

        task       (....., state_id, type_id, .....)
        defect     (....., state_id, type_id, ...)
        state      (state_id, state_name, ...)
        importance (imp_id, imp_name, ...)

    There are many such common attributes besides state -- importance (normal, urgent, etc.), priority, and so on -- and for all of them I want to use the same table. I am keeping a flag in each of the tables to differentiate task and defect. What is the best solution in such a case?

    If somebody is using this application in the health domain, they would like to assign different types, states and importances to their defects or tasks. Moreover, when a user selects any project I want to display all the types, states, etc. under a configuration parameters section.
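
    On the first point, joining the same table several times is the standard answer; aliases keep it readable, and LEFT JOINs cover columns that may be NULL. A sketch over the defect table -- the user table and its column names here are assumptions:

        SELECT d.*,
               ua.user_name AS assigned_to_name,
               uc.user_name AS created_by_name,
               uu.user_name AS updated_by_name,
               ux.user_name AS closed_by_name
        FROM defect d
        LEFT JOIN app_user ua ON ua.user_id = d.assigned_to
        LEFT JOIN app_user uc ON uc.user_id = d.created_by
        LEFT JOIN app_user uu ON uu.user_id = d.updated_by
        LEFT JOIN app_user ux ON ux.user_id = d.closed_by;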

    Read the article

  • Customer wants some data to appear after you later delete rows. System giant / not my creation. Fast

    - by John Sullivan
    This is a fairly common problem; it probably has a name, I just don't know what it is.

    A.) A user sees an obscure piece of information in Row B of L_OBSCURE_INFO displayed on some screen at a certain point.
    B.) Under certain circumstances we want to correctly delete data in L_OBSCURE_INFO. Unfortunately, nobody accounted for the fact that the user might want to backtrack and see some random piece of information that was most recently in L_OBSCURE_INFO.
    C.) The system is enormous and L_OBSCURE_INFO is used all the time. You have no idea what the ramifications are of implementing some kind of hack, and whatever you do, you don't want to introduce more bugs.

    I think the best approach would be to create an L_OBSCURE_INFO_HISTORY table and record a record in there every time you change data. But god help you ensuring it's accurate in a system where L_OBSCURE_INFO is being touched everywhere and you don't have time to implement L_OBSCURE_INFO_HISTORY.

    Is there a particularly easy, clever design solution for this kind of problem -- basically an elegant database hack? If not, does this kind of design problem belong to a particular class of problems, or have a name?
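
    The history-table idea can often be wired in without touching any of the callers, via a delete trigger on L_OBSCURE_INFO itself. A MySQL-flavoured sketch (the vendor here is unknown and trigger syntax varies; the columns col_a and col_b are hypothetical stand-ins for the real ones):

        CREATE TABLE l_obscure_info_history (
            hist_id     INT AUTO_INCREMENT PRIMARY KEY,
            col_a       INT,            -- mirrors l_obscure_info.col_a (hypothetical)
            col_b       VARCHAR(255),   -- mirrors l_obscure_info.col_b (hypothetical)
            archived_at DATETIME NOT NULL
        );

        CREATE TRIGGER trg_obscure_archive
        BEFORE DELETE ON l_obscure_info
        FOR EACH ROW
            -- Copy the doomed row before it disappears.
            INSERT INTO l_obscure_info_history (col_a, col_b, archived_at)
            VALUES (OLD.col_a, OLD.col_b, NOW());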

    Read the article
