Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

  • Why is ipdb not displaying stdout after my input?

    - by BryanWheelock
    I can't figure out why ipdb is not displaying stdout properly. I'm trying to debug why a test is failing, so I attempt to use the ipdb debugger. For some reason my input seems to be accepted, but the stdout is not displayed until I (c)ontinue. Is this something broken in ipdb? It makes it very difficult to debug a program.

    Below is an example ipdb session. Notice how I attempt to display the values of the attributes with user.is_authenticated(), user_profile.reputation and user.is_superuser, but the results are not displayed until the 'begin captured stdout' section:

        In [13]: !python manage.py test
        Creating test database...
        <SNIP: table loading output removed>
        nosetests ...E..
        > /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or

        ipdb> user.is_authenticated()
        ipdb> user_profile.reputation
        ipdb> user.is_superuser
        ipdb> c
        F
        > /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or

        ipdb> c
        .....EE......

        FAIL: test_can_retag_questions (APP.forum.tests.test_views.AuthorizationFunctionsTestCase)
        Traceback (most recent call last):
          File "/Users/Bryan/work/APP/../APP/forum/tests/test_views.py", line 71, in test_can_retag_questions
            self.assertTrue(auth.can_retag_questions(user))
        AssertionError:
        -------------------- >> begin captured stdout << ---------------------
        ipdb> True
        ipdb> 4001
        ipdb> False
        ipdb>
        --------------------- >> end captured stdout << ----------------------

        Ran 20 tests in 78.032s
        FAILED (errors=3, failures=1)
        Destroying test database...

        In [14]:

    Here is the actual function I'm trying to debug:

        def can_retag_questions(user):
            """Determines if a User can retag Questions."""
            user_profile = user.get_profile()
            import ipdb; ipdb.set_trace()
            return user.is_authenticated() and (
                RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
                user.is_superuser)

    I've also tried to use pdb, but that doesn't display anything: I see my test progress dots, and then nothing, with no response to keyboard input. Is this a problem with readline?
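
    The symptom described, with the debugger's output surfacing only inside the 'begin captured stdout' block, is consistent with the test runner capturing stdout rather than with ipdb being broken. Assuming nose is driving the run (the nosetests line in the output suggests it is), capture can be switched off for a debugging session; the flags below are nose's own:

        # let debugger and print output reach the terminal immediately
        nosetests -s
        # long form of the same switch
        nosetests --nocapture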

  • Django's post_save signal behaves weirdly with models using multi-table inheritance

    - by hekevintran
    I am noticing an odd behavior in the way Django's post_save signal works when using a model that has multi-table inheritance. I have these two models:

        class Animal(models.Model):
            category = models.CharField(max_length=20)

        class Dog(Animal):
            color = models.CharField(max_length=10)

    I have a post-save callback called echo_category:

        def echo_category(sender, **kwargs):
            print "category: '%s'" % kwargs['instance'].category

        post_save.connect(echo_category, sender=Dog)

    I have this fixture:

        [
            {
                "pk": 1,
                "model": "animal.animal",
                "fields": {
                    "category": "omnivore"
                }
            },
            {
                "pk": 1,
                "model": "animal.dog",
                "fields": {
                    "color": "brown"
                }
            }
        ]

    In every part of the program except the post_save callback, the following is true:

        from animal.models import Dog
        Dog.objects.get(pk=1).category == u'omnivore'  # True

    When I run syncdb and the fixture is installed, the echo_category function is run. The output from syncdb is:

        $ python manage.py syncdb --noinput
        Installing json fixture 'initial_data' from '~/my_proj/animal/fixtures'.
        category: ''
        Installed 2 object(s) from 1 fixture(s)

    The weird thing here is that the dog object's category attribute is an empty string. Why is it not 'omnivore' like it is everywhere else? As a temporary (hopefully) workaround, I reload the object from the database in the post_save callback:

        def echo_category(sender, **kwargs):
            instance = kwargs['instance']
            instance = sender.objects.get(pk=instance.pk)
            print "category: '%s'" % instance.category

        post_save.connect(echo_category, sender=Dog)

    This works, but I don't like it: I must remember to do it whenever the model inherits from another model, and it hits the database again. The other weird thing is that I must use instance.pk to get the primary key; the normal id attribute does not work (I cannot use instance.id). I do not know why this is. Maybe it is related to the reason the category attribute is not doing the right thing?
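
    For what it's worth, fixture loading is a special case that the signal itself flags: loaddata and syncdb's fixture installation save objects with raw=True, meaning the instance is written exactly as it comes from the fixture, without populating related rows (including the multi-table-inheritance parent row that holds category). A sketch of a guard for that case, keeping the callback otherwise as above:

        def echo_category(sender, **kwargs):
            if kwargs.get('raw'):
                # fixture loading: parent-table fields may not be populated yet
                return
            print "category: '%s'" % kwargs['instance'].category

        post_save.connect(echo_category, sender=Dog)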

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design had a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and our repositories looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup, load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories used for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method; Save() was responsible for interacting with the data context to do all the database operations.

    Now we are trying to adapt this to Entity Framework and take advantage of POCOs. Ideally we would not need Save() on the repositories, because the domain objects are tracked by the object context: we would just need a Save() method on the unit of work to have the object context save the changes, plus a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem for Entity Framework, but does not solve our Active Directory integration. Before, that logic lived in the repository's Save() method, but now it has no home: the unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I would argue this design only works if you have a single data store using Entity Framework. Any ideas on how best to approach this issue?
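
    One direction worth sketching: keep the unit of work as the coordinator and hand it both the object context and the directory service, so a single Save() flushes each store in turn. This is only an outline under assumed names (IDirectoryService and FlushPendingChanges are illustrative placeholders, not an existing API):

        // a sketch of a composite unit of work spanning EF and Active Directory
        public interface IDirectoryService
        {
            void FlushPendingChanges();   // apply queued AD operations
        }

        public class CompositeUnitOfWork : IUnitOfWork
        {
            private readonly System.Data.Objects.ObjectContext _context;
            private readonly IDirectoryService _directory;

            public CompositeUnitOfWork(System.Data.Objects.ObjectContext context,
                                       IDirectoryService directory)
            {
                _context = context;
                _directory = directory;
            }

            public void BeginTransaction() { /* open a TransactionScope here */ }

            public void Save()
            {
                _context.SaveChanges();           // persist EF-tracked entities
                _directory.FlushPendingChanges(); // then the AD side of the work
            }

            public void Commit() { /* complete the TransactionScope */ }
        }

    Repositories would then queue AD work on the directory service instead of performing it inline, leaving both stores to be flushed together.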

  • calling a stored proc over a dblink

    - by neesh
    I am trying to call a stored procedure over a database link. The code looks something like this:

        declare
            symbol_cursor package_name.record_cursor;
            symbol_record package_name.record_name;
        begin
            symbol_cursor := package_name.function_name('argument');
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                -- Do something with each record here, e.g.:
                dbms_output.put_line(symbol_record.field_a);
            end loop;
            close symbol_cursor;
        end;

    When I run this from the same DB instance and schema that package_name belongs to, it runs fine. However, when I run it over a database link (with the required modifications to the stored proc name, etc.), I get an Oracle error: ORA-24338: statement handle not executed. The modified version of the code over a dblink looks like this:

        declare
            symbol_cursor package_name.record_cursor@db_link_name;
            symbol_record package_name.record_name@db_link_name;
        begin
            symbol_cursor := package_name.function_name@db_link_name('argument');
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                -- Do something with each record here, e.g.:
                dbms_output.put_line(symbol_record.field_a);
            end loop;
            close symbol_cursor;
        end;
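
    ORA-24338 here is the classic symptom of fetching from a ref cursor that was opened on the other side of the link: Oracle generally cannot fetch across a database link from a cursor the remote database opened. A hedged workaround sketch, assuming a procedure can be created on the remote database, is to move the whole loop to where the cursor is opened and call the wrapper over the link:

        -- on the remote database: wrap the open/fetch/close in one procedure
        create or replace procedure process_symbols(p_arg in varchar2) as
            symbol_cursor package_name.record_cursor;
            symbol_record package_name.record_name;
        begin
            symbol_cursor := package_name.function_name(p_arg);
            loop
                fetch symbol_cursor into symbol_record;
                exit when symbol_cursor%notfound;
                null; -- process symbol_record here, on the remote side
            end loop;
            close symbol_cursor;
        end;
        /

        -- then, from the local database:
        begin
            process_symbols@db_link_name('argument');
        end;
        /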

  • How to accurately parse SMTP message status codes (DSN)?

    - by Geo
    RFC 1893 claims that status codes will come in the format below (you can read more in the RFC itself). But our bounce management system is having a hard time parsing error status codes from bounce messages. We are able to get the raw message, but depending on the email server the code will appear in different places. Is there any rule for parsing these messages that would give better results? We are not looking for a 100% solution, but at least 80%.

        This document defines a new set of status codes to report mail system
        conditions. These status codes are intended to be used for media and
        language independent status reporting. They are not intended for
        system specific diagnostics. The syntax of the new status codes is
        defined as:

            status-code = class "." subject "." detail
            class       = "2"/"4"/"5"
            subject     = 1*3digit
            detail      = 1*3digit

        White-space characters and comments are NOT allowed within a status-
        code. Each numeric sub-code within the status-code MUST be expressed
        without leading zero digits.

    The quote above from the RFC says one thing, but the text below, from a leading bounce-management tool, says something different. Where can I get a good source of standard status codes?

        Return Code  Description
        0            UNDETERMINED - (ie. Recipient Reply)
        10           HARD BOUNCE - (ie. User Unknown)
        20           SOFT BOUNCE - General
        21           SOFT BOUNCE - Dns Failure
        22           SOFT BOUNCE - Mailbox Full
        23           SOFT BOUNCE - Message Too Large
        30           BOUNCE - NO EMAIL ADDRESS. VERY RARE!
        40           GENERAL BOUNCE
        50           MAIL BLOCK - General
        51           MAIL BLOCK - Known Spammer
        52           MAIL BLOCK - Spam Detected
        53           MAIL BLOCK - Attachment Detected
        54           MAIL BLOCK - Relay Denied
        60           AUTO REPLY - (ie. Out Of Office)
        70           TRANSIENT BOUNCE
        80           SUBSCRIBE Request
        90           UNSUBSCRIBE/REMOVE Request
        100          CHALLENGE-RESPONSE
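
    A sketch of one practical approach: when the bounce carries a structured DSN part (RFC 1894), the machine-readable code lives in a Status: field, so try that first and only fall back to scanning the free text for anything shaped like class.subject.detail. The regex and the fallback policy here are assumptions, not a standard:

        import re

        # anything shaped like an RFC 1893/3463 status code, e.g. 5.1.1
        STATUS_RE = re.compile(r'\b([245])\.(\d{1,3})\.(\d{1,3})\b')

        def extract_status(raw_message):
            # prefer the structured DSN field if one is present
            m = re.search(r'^Status:\s*([245]\.\d{1,3}\.\d{1,3})',
                          raw_message, re.MULTILINE | re.IGNORECASE)
            if m:
                return m.group(1)
            # otherwise take the first code-shaped token in the body
            m = STATUS_RE.search(raw_message)
            return m.group(0) if m else None

    The tool's 0/10/20... codes are that vendor's own bounce categories, not SMTP status codes; a mapping table from class.subject.detail to such categories has to be maintained by hand.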

  • MS SQL datetime precision problem

    - by Nailuj
    I have a situation where two people might work on the same order (stored in an MS SQL database) from two different computers. To prevent data loss in the case where one saves his copy of the order first, and a little later the second saves his copy and overwrites the first, I've added a check against the lastSaved field (a datetime) before saving. The code looks roughly like this:

        private bool orderIsChangedByOtherUser(Order localOrderCopy)
        {
            // Look up a fresh version of the order from the DB
            Order databaseOrder = orderService.GetByOrderId(localOrderCopy.Id);

            if (databaseOrder != null &&
                databaseOrder.LastSaved > localOrderCopy.LastSaved)
            {
                return true;
            }
            else
            {
                return false;
            }
        }

    This works most of the time, but I have found one small bug. If orderIsChangedByOtherUser returns false, the local copy has its lastSaved updated to the current time and is then persisted to the database; the value of lastSaved in the local copy and in the DB should now be the same. However, if orderIsChangedByOtherUser is run again, it sometimes returns true even though no other user has made changes to the DB. When debugging in Visual Studio, databaseOrder.LastSaved and localOrderCopy.LastSaved appear to have the same value, but on closer inspection they sometimes differ by a few milliseconds. I found an article with a short note on the millisecond precision of datetime in SQL Server:

        Another problem is that SQL Server stores DATETIME with a precision
        of 3.33 milliseconds (0.00333 seconds).

    The solution I could think of is to compare the two datetimes and consider them equal if they differ by less than, say, 10 milliseconds. My question to you: are there better/safer ways to compare two datetime values in MS SQL to see if they are exactly the same?
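
    For reference, a minimal sketch of the tolerance comparison described above; the 10 ms window is an assumption sized to cover SQL Server's 3.33 ms DATETIME rounding. (A rowversion/timestamp column is the usual way to make this kind of optimistic-concurrency check exact, since it sidesteps datetime precision entirely.)

        private bool orderIsChangedByOtherUser(Order localOrderCopy)
        {
            Order databaseOrder = orderService.GetByOrderId(localOrderCopy.Id);
            if (databaseOrder == null)
            {
                return false;
            }
            // treat timestamps within the rounding window as equal
            TimeSpan difference = databaseOrder.LastSaved - localOrderCopy.LastSaved;
            return difference > TimeSpan.FromMilliseconds(10);
        }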

  • Rails: Custom template for email "deliver_" method?

    - by neezer
    I'm building an email system that stores my different emails in the database and calls the appropriate deliver_ method via method_missing (I can't explicitly declare the methods, since they're user-generated). My problem is that my Rails app still tries to render a template for whatever the generated email is, though those templates don't exist. I want to force all emails to use the same template (views/test_email.html.haml), which will be set up to draw its formatting from my database records. How can I accomplish this? I tried adding render :template => 'test_email' in the test_email method in emailer_controller with no luck.

    models/emailer.rb:

        class Emailer < ActionMailer::Base
          def method_missing(method, *args)
            # not implemented yet
            logger.info "method missing was called!!"
          end
        end

    controller/emailer_controller.rb:

        class EmailerController < ApplicationController
          def test_email
            @email = Email.find(params[:id])
            Emailer.send("deliver_#{@email.name}")
          end
        end

    views/emails/index.html.haml:

        %h1 Listing emails

        %table{ :cellspacing => 0 }
          %tr
            %th Name
            %th Subject
          - @emails.each do |email|
            %tr
              %td=h email.name
              %td=h email.subject
              %td= link_to 'Show', email
              %td= link_to 'Edit', edit_email_path(email)
              %td= link_to 'Send Test Message', :controller => 'emailer', :action => 'test_email', :params => { :id => email.id }
              %td= link_to 'Destroy', email, :confirm => 'Are you sure?', :method => :delete

        %p= link_to 'New email', new_email_path

    Error I'm getting with the above:

        Template is missing
        Missing template emailer/name_of_email_in_database.erb in view path app/views
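
    A sketch of one way to force the template from inside the mailer itself, assuming Rails 2.x ActionMailer, which (in 2.3 at least) exposes a template accessor that overrides the method-name-based template lookup; verify against your version. The Email lookup and the recipient attribute below are illustrative assumptions:

        class Emailer < ActionMailer::Base
          def method_missing(method, *args)
            email = Email.find_by_name(method.to_s)  # hypothetical lookup by name
            recipients email.recipient               # assumed attribute on Email
            subject    email.subject
            template   "test_email"  # render app/views/emailer/test_email.html.haml
          end
        end

    Note the controller's render call cannot help here: the missing template is the mailer's, rendered during deliver_*, not the controller action's view.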

  • How to estimate size of data to transfer when using DbCommand.ExecuteXXX?

    - by Yadyn
    I want to show the user detailed progress information when performing potentially lengthy database operations, specifically when inserting or updating data that may be on the order of hundreds of KB or MB.

    Currently, I'm using in-memory DataTables and DataRows which are then synced with the database via TableAdapter.Update calls. This works fine and dandy, but the single call leaves little opportunity to glean any kind of progress info to show the user. I have no idea how much data is passing through the network to the remote DB, or how far along it is. Basically, all I know is when Update returns, at which point it is assumed complete (barring any errors or exceptions). That means all I can show is 0%, then a pause, then 100%.

    I can count the number of rows, even going so far as to count how many are actually Modified or Added, and I could maybe calculate an estimated size per DataRow based on the datatype of each column, using sizeof for value types like int and checking lengths for things like strings or byte arrays. With that, I could probably determine an estimated total transfer size before updating, but I'm still stuck without any progress info once Update is called on the TableAdapter.

    Am I stuck using an indeterminate progress bar or a wait cursor? Would I need to radically change our data access layer to hook into this kind of information? Even if I can't get it down to the precise KB transferred (like a web browser's file download progress bar), could I at least know when each DataRow/DataTable finishes, or something? How do you best show this kind of progress info using ADO.NET?
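
    A sketch of per-row (rather than per-byte) progress, assuming the underlying SqlDataAdapter is reachable (a generated TableAdapter typically exposes its Adapter property to partial classes): the adapter raises RowUpdated once for each row it sends, which is enough to drive a determinate progress bar.

        DataTable changes = table.GetChanges();
        int total = (changes == null) ? 0 : changes.Rows.Count;
        int done = 0;

        adapter.RowUpdated += delegate(object sender, SqlRowUpdatedEventArgs e)
        {
            done++;
            // marshal to the UI thread as appropriate for the application
            progressBar.Value = (int)(100.0 * done / Math.Max(total, 1));
        };

        adapter.Update(table);

    Combined with the per-row size estimate described above, the same event could approximate bytes transferred, though the estimate would remain rough.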

  • How to submit a form and show a different page in ASP.NET MVC?

    - by melaos
    Hi guys, I'm new to ASP.NET MVC. I just built a two-page app which takes the user's registration information and posts it to the database. I use a lot of jQuery and AJAX calls to retrieve data from the database using a LINQ to SQL stored-proc object. Currently I'm stuck on one page: after the user submits the form, it should redirect him to /Home/AddProduct. What I found was this error:

        Validation of viewstate MAC failed. If this application is hosted by
        a Web Farm or cluster, ensure that <machineKey> configuration
        specifies the same validationKey and validation algorithm.
        AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of
        the current web request. Please review the stack trace for more
        information about the error and where it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate
        MAC failed. If this application is hosted by a Web Farm or cluster,
        ensure that <machineKey> configuration specifies the same
        validationKey and validation algorithm. AutoGenerate cannot be used
        in a cluster.

    What I used on my form is basically a combination of HTML controls, ASP.NET controls and some ASP.NET MVC-type controls. I submit the form using action="/Home/ProductAdded". After some googling I found I was supposed to add a machine key, but after doing so the index page becomes unviewable, because it couldn't find the index file. Removing the action helps, but then the form just doesn't go anywhere. So what am I missing here? I feel I'm missing a lot of fundamental understanding about ASP.NET MVC, and I don't even know how to submit a form and go to a different page!
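
    A minimal sketch of the post-redirect-get flow in ASP.NET MVC 1, under the assumption that the form is rendered with plain HTML or Html helpers rather than ASP.NET server controls (viewstate MAC errors usually point to leftover WebForms server controls on an MVC view; MVC pages have no viewstate to validate):

        // In the view (.aspx):
        //   <% using (Html.BeginForm("ProductAdded", "Home")) { %> ... <% } %>

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult ProductAdded(FormCollection form)
        {
            // ... persist the registration here ...
            return RedirectToAction("AddProduct");  // lands on /Home/AddProduct
        }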

  • Entity Framework - Foreign key constraints not added for inherited entity

    - by Tri Q
    Hello. It appears to me that a strange phenomenon is occurring with inherited entities (TPT) in EF4. I have three entities:

        1. Asset
        2. Property
        3. Activity

    Property is a derived type of Asset, and Property has many activities (many-to-many). When modeling this in my EDMX, everything seems fine until I try to insert a new Property into the database. If the property does not contain any Activity it works, but all hell breaks loose when I add new activities to the new Property.

    As it turns out, after two days of crawling the web and fiddling around, I noticed that in the EF store (SSDL) some of the constraints between entities were not picked up during the update process. The Property_Activity table, which links properties and activities, shows only one constraint, FK_Property_Activity_Activity; FK_Property_Activity_Property was missing. I knew this was an Entity Framework anomaly, because when I switched the relationship in the database to:

        Asset <-- Asset_Activity <-- Activity

    all foreign key constraints were picked up after an update, and the save succeeded with or without activities in the new property.

    Is this intended behavior or a bug in EF? How do I get around this problem? Should I abandon inheritance altogether?

  • Mysqli query only works on localhost, not webserver

    - by whamo
    Hello. I have changed some of my old queries to the mysqli framework to improve performance. Everything works fine on localhost, but when I upload it to the webserver it outputs nothing. After connecting, I check for errors and there are none. I also checked the PHP modules installed, and mysqli is enabled. I am certain that it creates a connection to the database, as no errors are displayed (when I changed the database name string, it gave the error). There is no output from the query on the webserver. The code looks like this:

        $mysqli = new mysqli("server", "user", "password");

        if (mysqli_connect_errno()) {
            printf("Can't connect Errorcode: %s\n", mysqli_connect_error());
            exit;
        }

        // Query used
        $query = "SELECT name FROM users WHERE id = ?";

        if ($stmt = $mysqli->prepare($query)) {
            // Specify parameters to replace '?'
            $stmt->bind_param("d", $id);
            $stmt->execute();

            // bind variables to prepared statement
            $stmt->bind_result($_userName);

            while ($stmt->fetch()) {
                echo $_userName;
            }
            $stmt->close();
        }

        // close connection
        $mysqli->close();

    As I said, this code works perfectly on my local server, just not online. I checked the error logs and there is nothing, so everything points to a good connection. All the tables exist as well, etc. Anyone have any ideas? This one has me stuck! Also, if I get this working, will all my other queries still work, or will I need to make them use the mysqli framework as well? Thanks in advance.
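
    Two things worth checking, sketched below: the constructor above never selects a database (the optional fourth argument is missing), and a failed prepare() is silent in the code as written. Either would produce exactly this works-locally-but-not-remotely symptom if the two servers differ in default database or schema. The database name is a placeholder:

        <?php
        // pass the database name explicitly rather than relying on a default
        $mysqli = new mysqli("server", "user", "password", "database_name");
        if ($mysqli->connect_errno) {
            die("Connect failed: " . $mysqli->connect_error);
        }

        // surface prepare errors instead of skipping the block silently
        if (!($stmt = $mysqli->prepare("SELECT name FROM users WHERE id = ?"))) {
            die("Prepare failed: " . $mysqli->error);
        }

    On the last question: old mysql_* calls and mysqli can coexist in one application as long as each query uses the connection opened by its own extension, so the remaining queries do not have to be converted at once.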

  • Proper use of the IDisposable interface

    - by cwick
    I know from reading the MSDN documentation that the "primary" use of the IDisposable interface is to clean up unmanaged resources (http://msdn.microsoft.com/en-us/library/system.idisposable.aspx). To me, "unmanaged" means things like database connections, sockets, window handles, etc. But I've seen code where the Dispose method is implemented to free managed resources, which seems redundant to me, since the garbage collector should take care of that for you. For example:

        public class MyCollection : IDisposable
        {
            private List<String> _theList = new List<String>();
            private Dictionary<String, Point> _theDict = new Dictionary<String, Point>();

            // Die, you gravy sucking pig dog!
            public void Dispose()
            {
                _theList.Clear();
                _theDict.Clear();
                _theList = null;
                _theDict = null;
            }
        }

    My question is: does this make the garbage collector free the memory used by MyCollection any faster than it normally would?

    Edit: So far people have posted some good examples of using IDisposable to clean up unmanaged resources such as database connections and bitmaps. But suppose _theList in the above code contained a million strings, and you wanted to free that memory now rather than waiting for the garbage collector. Would the above code accomplish that?
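
    For contrast, a sketch of the conventional dispose pattern: Dispose exists to release unmanaged resources deterministically (and to Dispose owned managed objects that themselves wrap such resources); it does not make plain managed memory collectable any sooner than simply dropping the references would. The fields below are illustrative:

        public class ResourceHolder : IDisposable
        {
            private IntPtr _nativeHandle;      // unmanaged resource (illustrative)
            private System.IO.Stream _stream;  // managed object wrapping one
            private bool _disposed;

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);     // finalizer no longer needed
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed) return;
                if (disposing && _stream != null)
                {
                    _stream.Dispose();         // safe to touch managed objects here
                }
                // release _nativeHandle here on both paths
                _disposed = true;
            }

            ~ResourceHolder()
            {
                Dispose(false);                // GC path: unmanaged cleanup only
            }
        }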

  • SQL Server full-text search with CONTAINSTABLE is very slow when used in a JOIN

    - by Bob
    Hello. I am using SQL 2008 full-text search, and I am having serious performance issues depending on how I use CONTAINS or CONTAINSTABLE. Here are samples (table1 has about 5000 records, and there is a covering index on table1 containing all the fields in the where clause; I tried to simplify the statements, so forgive any syntax issues):

    Scenario 1:

        select * from table1 as t1
        where t1.field1 = 90
          and t1.field2 = 'something'
          and exists (select top 1 *
                      from containstable(table1, *, 'something') as t2
                      where t2.[key] = t1.id)

    Results: 10 seconds (very slow).

    Scenario 2:

        select * from table1 as t1
        join containstable(table1, *, 'something') as t2 on t2.[key] = t1.id
        where t1.field1 = 90
          and t1.field2 = 'something'

    Results: 10 seconds (very slow).

    Scenario 3:

        declare @tbl table (id uniqueidentifier primary key)

        insert into @tbl
        select [key] from containstable(table1, *, 'something')

        select * from table1 as t1
        where t1.field1 = 90
          and t1.field2 = 'something'
          and exists (select id from @tbl as tbl where id = t1.id)

    Results: a fraction of a second (super fast).

    Bottom line: it seems that if I use CONTAINSTABLE in any kind of join or where-clause condition of a select statement that also has other conditions, the performance is really bad, and the number of reads from the database shown in Profiler goes through the roof. But if I first run the full-text search, put the results in a table variable and use that variable everywhere, it goes super fast and the number of reads is much lower. In the "bad" scenarios it somehow seems to get stuck in a loop that reads from the database many times, though of course I don't understand why.

    Now the questions: first, why is this happening? And second, how scalable are table variables? If the search returned tens of thousands of records, would it still be fast? Any ideas? Thanks.

  • trackPageView on Google Analytics for iPhone Not Working

    - by DigitalZombieKid
    I'm trying to get Google Analytics working on an iPhone application without much luck. I've followed all the instructions on their website (google/apis/analytics/docs/tracking/mobileAppsTracking.html) and studied their sample application (google/gaformobileapps/GoogleAnalyticsIphone_0.7.tar.gz). When I run my application and go to the Google Analytics website (https://www.google.com/analytics/reporting/), the only page that records is /app_entry_point: I see one count in my detailed report every time my app fires up. However, I have added other pages to be tracked, and they are not working. Here is a sample of two pages I've added:

        trackPageview:@"/calculator";
        trackPageview:@"/tellafriend";

    I call them from various view controllers in the app, and in each of those view controllers I import the GANTracker header:

        #import "GANTracker.h"

    I'll admit it: I'm an Objective-C newbie. Any help you can offer is greatly appreciated! Do I need to physically dispatch them to get trackPageview working? If so, why is /app_entry_point the only page recorded by Google Analytics?
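
    For what it's worth, a sketch of how the calls look in the old GANTracker SDK (the 0.7 package named above; verify the signatures against its header): as quoted, trackPageview: has no receiver, so the message is never sent to the tracker, and queued hits only leave the device when dispatch is called:

        NSError *error = nil;
        if (![[GANTracker sharedTracker] trackPageview:@"/calculator"
                                             withError:&error]) {
            NSLog(@"GANTracker error: %@", error);
        }
        // hits sit in the local queue until a dispatch
        [[GANTracker sharedTracker] dispatch];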

  • SEO - Problems possibly related to 301 Moved Permanently

    - by ILMV
    Right, here's the story: we have had a website for one of our brands for many years. The site design was very bad, and we recently did a complete overhaul, mostly design but also some of the backend code. The original site used links such as example.com/products/item/127, and I wanted to make them more user-friendly, especially to include the product name; the same link now reads example.com/product/127/my-jucy-product/.

    Since our switchover, our Google results have taken a beating (we were on the first page for our normal search terms; now we're nearer the fourth!). The other problem is that the links to the old products haven't updated to the new links, despite my coding a 301 redirect from old to new. The 301 is not fired from .htaccess, but from our PHP framework. I had a look at how the site loads from an old link that is still in Google, and here's what Firebug reports:

        GET <google link>                             302 Found
        GET example.com/products/item/127             302 Found
        GET example.com/products/item/127             301 Moved Permanently
        GET example.com/product/127/my-jucy-product/  302 Found

    So the Google link gets a 302, good. But when the old link comes in, our framework returns a 302! Only afterwards, when the request finally hits the right part of the framework, does it 301. So here's my question: have our old links failed to update, and has our Google ranking nosedived, because Google sees a 302 before the 301?

    At the time I was reluctant to mess with our .htaccess because it had become pretty complicated and I was under some pretty intense time constraints; now I'm wondering whether that was the wrong decision and perhaps I should revisit it. Many thanks!

    Edit: Bugger. I just signed up for Webmaster Tools and I'm getting redirect errors all over the place, hundreds of them! I think this is my problem.
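
    If the .htaccess route is revisited, a sketch of issuing the permanent redirect at the web server, before the framework's 302-first chain is ever entered; the pattern assumes the URL shapes quoted above, and the product-name slug would still have to be supplied by the application if it is required:

        RewriteEngine On
        # old-style product URL -> new canonical location, in one hop
        RewriteRule ^products/item/([0-9]+)$ /product/$1/ [R=301,L]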

  • create new db in mysql with php syntax

    - by Jacksta
    I am trying to create a new DB called testDB2; below is my code. On running the script, all I get is an error on line 7:

        Fatal error: Call to undefined function mysqlquery() in
        /home/admin/domains/domain.com.au/public_html/db_createdb.php on line 7

    This is my code:

        <?
        $sql = "CREATE database testDB2";
        $connection = mysql_connect("localhost", "admin_user", "pass")
            or die(mysql_error());
        $result = mysqlquery($sql, $connection) or die(mysql_error());
        if ($result) {
            $msg = "<p>Databse has been created!</p>";
        }
        ?>
        <HTML>
        <head>
            <title>Create MySQL database</title>
        </head>
        <body>
        <? echo "$msg"; ?>
        </body>
        </HTML>
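
    The undefined function named in the fatal error looks like a simple typo: the MySQL extension's function is mysql_query(), with an underscore. A corrected line 7, with everything else unchanged:

        $result = mysql_query($sql, $connection) or die(mysql_error());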

  • Parsing/Tokenizing a String Containing a SQL Command

    - by Alan Storm
    Are there any open source libraries (any language, Python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components? That is, if I had the following string:

        SELECT a.foo, b.baz, a.bar
        FROM TABLE_A a
        LEFT JOIN TABLE_B b ON a.id = b.id
        WHERE baz = 'snafu';

    I'd get back a data structure/object something like:

        //fake PHPish
        $results['select-columns'] = Array[a.foo, b.baz, a.bar];
        $results['tables'] = Array[TABLE_A, TABLE_B];
        $results['table-aliases'] = Array[a=TABLE_A, b=TABLE_B];
        //etc...

    Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL; that's not what I want. I realize I could dig through an open source database's code to find what I want, but I was hoping for something a little more ready-made (although if you know where in the MySQL, PostgreSQL or SQLite source to look, feel free to pass it along). Thanks!
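
    One Python option that fits the "ready-made" request is python-sqlparse, a standalone non-validating SQL tokenizer/parser. A tiny sketch of walking its token stream; the exact token classes available are worth checking against the version installed:

        import sqlparse

        sql = ("SELECT a.foo, b.baz, a.bar FROM TABLE_A a "
               "LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu';")

        statement = sqlparse.parse(sql)[0]
        for token in statement.tokens:
            # ttype is the token class; str(token) is the raw text
            print token.ttype, repr(str(token))

    Extracting select-columns, tables and aliases as in the PHPish structure above would mean grouping these tokens yourself; sqlparse tokenizes and groups, but does not hand back a semantic model.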

  • Foreach is crashing the script, but no errors are reported

    - by ILMV
    So I've created this Smarty function to get images from my Flickr photostream using SimplePie... simple really, or so it should be. The problem I'm having is that the foreach crashes the script. This doesn't happen if I put an exit after the closing foreach, but of course then the rest of my script doesn't execute. The problem also completely subsides if I remove the foreach; I've tested it, and it's not the contents of the foreach, but the loop itself. Error reporting is turned on, but I don't get any errors, and I also tried messing with memory_limit, with no luck. Anyone know why this foreach is killing my script? Thanks!

        function smarty_function_flickr($params, &$smarty)
        {
            require_once('system/library/SimplePie/simplepie.inc');
            require_once('system/library/SimplePie/idn/idna_convert.class.php');

            $flickr = new flickr();

            /**
             * Set up SimplePie with all default values using shorthand syntax.
             */
            $feed = new SimplePie($params['feed'], 'system/library/SimplePie/cache', '600');
            $feed->handle_content_type();

            /**
             * What sizes should we use?
             * Choices: square, thumb, small, medium, large.
             */
            $thumb = 'square';
            $full = 'medium';

            $output = array();
            $counter = 0;

            // If I comment this foreach out the problem subsides;
            // I know it is not the code within the foreach.
            foreach ($feed->get_items() as $item) {
                $url = $flickr->image_from_description($item->get_description());

                $output[$counter]['title'] = $item->get_title();
                $output[$counter]['image'] = $flickr->select_image($url, $full);
                $output[$counter]['thumb'] = $flickr->select_image($url, $thumb);
                $counter++;
            }

            // Set template variables and template
            $smarty->assign('flickr', $output);
            $smarty->display('forms/' . $params['template'] . '.tpl');
        }
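
    A diagnostic sketch: a fatal error (memory exhaustion while SimplePie materializes its item objects is one plausible candidate) can kill the script before anything is printed, and depending on php.ini it may only reach the error log. The calls below are standard PHP; their placement inside the loop is the suggestion:

        ini_set('display_errors', '1');
        error_reporting(E_ALL);

        foreach ($feed->get_items() as $item) {
            // watch this climb toward memory_limit in the error log
            error_log('peak memory: ' . memory_get_peak_usage(true));
            // ... existing loop body ...
        }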

  • Making one table equal to another without a delete *

    - by Joshua Atkins
    Hey, I know this is a bit of a strange one, but any help would be greatly appreciated. The scenario: we have a production database at a remote site and a developer database in our local office. Developers make changes directly to the developer DB, and as part of the deployment process a C# application runs and produces a series of .sql scripts that we can execute on the remote side (essentially delete *, then insert). We are looking for something a bit more elaborate, as the downtime from the delete * is unacceptable. This is all reference data that controls menu items, functionality, etc. of a major website.

    I have a sproc that essentially returns a diff of two tables. My thinking is that I can insert all the expected data into a tmp table, execute the diff, drop anything from the destination table that is not in the source, and then upsert everything else. The question: is there an easy way to do this without using a cursor? To illustrate, the sproc returns a recordset structured like this:

        TableName  Col1  Col2  Col3
        Dest       ...
        Src        ...

    Anything in the recordset with TableName = Dest should be deleted (as it does not exist in Src), and anything in Src should be upserted into Dest. I cannot think of a way to do this purely set-based, but my DB-fu is weak. Any help would be appreciated. Apologies if the explanation is sketchy; let me know if you need any more details.
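
    A set-based sketch of the delete/upsert step, assuming the expected rows are already staged in #src and both tables share a key column Id (names are placeholders; on SQL Server 2008 a single MERGE statement can express the same logic):

        -- remove rows that no longer exist in the source
        DELETE d
        FROM DestTable AS d
        WHERE NOT EXISTS (SELECT 1 FROM #src AS s WHERE s.Id = d.Id);

        -- update rows present in both
        UPDATE d
        SET d.Col1 = s.Col1,
            d.Col2 = s.Col2
        FROM DestTable AS d
        JOIN #src AS s ON s.Id = d.Id;

        -- insert rows present only in the source
        INSERT INTO DestTable (Id, Col1, Col2)
        SELECT s.Id, s.Col1, s.Col2
        FROM #src AS s
        WHERE NOT EXISTS (SELECT 1 FROM DestTable AS d WHERE d.Id = s.Id);

    Run inside one transaction, this avoids the delete-star window entirely: readers never see an empty table.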

  • Accessing dynamically added control from GridView Footer row

    - by harold-sota
    Hi, I have a GridView control in my ASP.NET page with auto-generated fields; only a footer template is present under asp:TemplateField. I can bind this GridView to any database table, depending on user selection. Now I want to add a new record to the database, so I add textboxes to the footer template cells at runtime, depending on the number of columns in the table. But when I access these textboxes from the footer template in the GridView's RowCommand event, the textbox controls are not retrieved. This is the code:

        SGridView.ShowFooter = True
        For i As Integer = 0 To ctrList.Count
            Dim ctr As Control = CType(ctrList.Item(i), Control)
            SGridView.FooterRow.Cells(i + 1).Controls.Add(ctr)
        Next

    ctrList contains TextBox, CheckBox and DropDownList controls, etc. Everything is fine, except that when I want to get the text, value or checked state of the controls, I can't cast the controls in the RowCommand event. Here is the code:

        Protected Sub SGridView_RowCommand(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewCommandEventArgs) Handles SGridView.RowCommand
            If e.CommandName = "Add" Then
                Dim ctrList As ControlCollection = SGridView.FooterRow.Controls
                For Each ctr As Control In ctrList
                    If TypeOf ctr Is TextBox Then
                        Dim name As TextBox = CType(ctr, TextBox)
                        Dim val As String = name.Text
                    End If
                Next
            End If
        End Sub

    This example is for the TextBox control. Please suggest how I can get the footer textboxes, so that I can save the data to the database.
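
    Two things worth checking, sketched below. First, FooterRow.Controls contains the footer's cells, not the textboxes; the inputs live one level down, inside each cell. Second, dynamically added controls must be recreated on every postback (before the event fires) or they simply no longer exist when RowCommand runs. Assuming the controls are re-added early in the page lifecycle, the loop becomes:

        ' walk each footer cell, then the controls inside it
        For Each cell As TableCell In SGridView.FooterRow.Cells
            For Each ctr As Control In cell.Controls
                If TypeOf ctr Is TextBox Then
                    Dim val As String = CType(ctr, TextBox).Text
                    ' use val for the insert here
                End If
            Next
        Next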

  • How do I associate Parameters to Command objects in ADO with VBScript?

    - by Krashman5k
    I have been working on an ADO VBScript that needs to accept parameters and incorporate those parameters in the query string that gets passed to the database. I keep getting errors when the recordset object attempts to open. If I pass a query without parameters, the recordset opens and I can work with the data. When I run the script through a debugger, the Command object does not show a value for the Parameter object. It seems to me that I am missing something that associates the Command object and the Parameter object, but I do not know what. Here is a bit of the VBScript code:

        'Open text file to collect the SQL query string
        Set fso = CreateObject("Scripting.FileSystemObject")
        fileName = "C:\SQLFUN\Limits_ADO.sql"
        Set tso = fso.OpenTextFile(fileName, FORREADING)
        SQL = tso.ReadAll

        'Create ADO instance
        connString = "DRIVER={SQL Server};SERVER=myserver;UID=MyName;PWD=notapassword; Database=favoriteDB"
        Set connection = CreateObject("ADODB.Connection")
        Set cmd = CreateObject("ADODB.Command")

        connection.Open connString
        cmd.ActiveConnection = connection
        cmd.CommandText = SQL
        cmd.CommandType = adCmdText

        Set paramTotals = cmd.CreateParameter
        With paramTotals
            .value = "tot%"
            .Name = "Param1"
        End With

        'The error occurs on the next line
        Set recordset = cmd.Execute

        If recordset.EOF Then
            WScript.Echo "No Data Returned"
        Else
            Do Until recordset.EOF
                WScript.Echo recordset.Fields.Item(0) ' & vbTab & recordset.Fields.Item(1)
                recordset.MoveNext
            Loop
        End If

    The SQL string I use is fairly standard, except that I want to pass a parameter to it. It is something like this:

        SELECT column1 FROM table1 WHERE column1 IS LIKE ?

    I understand that ADO should replace the "?" with the parameter value I assign in the script. The problem I am seeing is that the Parameter object shows the correct value, but the Command object's parameter field is null, according to my debugger.
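
    The missing association is almost certainly Parameters.Append: CreateParameter only constructs a free-standing Parameter object, and until it is appended the Command has no parameters at all. A sketch with the arguments supplied inline (200 = adVarChar, 1 = adParamInput; plain VBScript has no ADO constants unless they are defined or a type library is referenced). Note also that "IS LIKE" in the query above is not valid SQL; plain LIKE is:

        Dim paramTotals
        ' name, type (adVarChar), direction (adParamInput), size, value
        Set paramTotals = cmd.CreateParameter("Param1", 200, 1, 50, "tot%")
        cmd.Parameters.Append paramTotals

        Set recordset = cmd.Execute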

  • rapid application development tools for very basic GUI apps

    - by Jurij
    I know there are many RAD platforms out there. In fact, there are so many that I'm having a hard time finding out which one fits me best. What I want is a RAD tool that lets me define a database data model (make DB tables) and then create (view and edit) forms for the various tables. Data input, updating and various queries should be easy, and the GUI should be generated automatically. I'd also like to add some functionality by coding (such as various complex calculations on the data).

    I'm a programmer, so I'm willing to learn a more complete, full-blown RAD solution if you can point me to it (NetBeans and Ruby on Rails being two such frameworks that would probably be high on the list). I'm currently writing Windows Forms logistics apps in .NET. I've actually developed a very crude and basic version of what I need, but I just know there are solutions out there that are much better, and I'd benefit from knowing how to use them. So, in short, the basic requirements:

        * database-based data storage (SQLite if possible)
        * very automated GUI creation
        * desktop-based (as in: not a web app)
        * extendable by coding
        * used for creating simple data entry, view & query apps

    So basically something like Oracle Forms or DotNetMushroom Rapid Application Developer, but for .NET and SQLite if possible.

  • How to arrange business logic in a Kohana 3 project

    - by Pekka
    I'm looking for advice, tutorials and links on how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past, but never worked against a "formalized" MVC framework, so I'm still getting my head around the terminology: toying with basic examples, building views and templates, and so on. I'm progressing fairly well, but I want to set up a real-world web project (one of my own that I've been planning for quite some time) as a learning object.

    I learn best by example, but example-based documentation is a bit sparse for Kohana 3 right now; they say so themselves on the site. While I'm not worried about learning the framework as I go along, I want to make sure the code base is healthily structured from the start: controllers split nicely and named according to standards, and, most importantly, the business logic separated into appropriately sized models.

    My application could, at its core, be described as a business directory with a range of search and listing functions, and a login area for each entry owner. The actual administrative database backend is already taken care of. Suppose I have all the API worked out and in place already (list all businesses, edit business, list businesses by street name, create an offer while logged in as a business, and so on), and I'm just looking for how to fit the functionality into an MVC pattern and a Kohana application structure that can be easily extended:

        * Do you know real-life examples of "database-heavy" applications like
          directories or online communities built on Kohana 3, preferably open
          source, so I could take a peek at how they do it?
        * Are there conventions or best practices for structuring an extendable
          login area for end users in a Kohana project, one able to handle not
          only a business directory page but further products on separate pages?
        * Do you know any good resources on building complex applications with
          Kohana?
        * Have you built something similar, and could you give me
          recommendations on project structure?

  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an automation error when running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is: "Run-time error '-2147217843 (80040e4d)': Automation error". I checked the MSDN site, which suggested this may be due to a bug associated with the sqloledb provider, and that one way to mitigate it is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters, and I'm still getting the same error. Here is the code with ODBC as the provider:

        Dim cnt As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim stSQL As String
        Dim wbBook As Workbook
        Dim wsSheet As Worksheet
        Dim rnStart As Range

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create the ADODB connection object, open the connection
            'and construct the connection string.
            Set cnt = New ADODB.Connection
            cnt.ConnectionString = _
                "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; " & _
                "Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';"
            cnt.Open

            On Error GoTo ErrorHandler

            'Open Excel and run the query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
                "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]"
            cnt.Execute (strSQL)

        'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set cnt = Nothing
            Exit Sub

        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit

            'clean up and reclaim memory resources.
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set cnt = Nothing
                cnt.Close
            End If
        End Sub
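
    Two details in the code above are worth ruling out before blaming the provider. Error 0x80040E4D generally indicates an authentication failure, and as written the single quotes around Pwd are sent as part of the password; separately, the declared variable is stSQL while the code assigns strSQL, which only compiles silently without Option Explicit. A sketch of the corrected lines:

        ' quotes removed from the password; connection string otherwise unchanged
        cnt.ConnectionString = _
            "Driver={SQL Server};Server=onlineSQLServer2010.foo.com;" & _
            "Database=fooDB;Uid=logonalready;Pwd=helpmeOB1;"

        ' keep the declaration and the assignment in agreement
        Dim strSQL As String

    One further assumption to check: OPENDATASOURCE runs on the SQL Server, so C:\Database.xlsx must be a path the server itself can reach, not just the client machine.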

  • How can I test this SQL Server performance Utility?

    - by Martin Smith
    As part of my MSc I need to do a three-month project later this year. I have decided to do something that will likely be useful to me in the workplace, and to spend the time getting to understand SQL Server internals. The deliverable will be a performance advisor that checks a variety of rules: some static, such as finding redundant indexes, and some more dynamic, such as using XEvents to find outlying invocations of stored procedure execution time when certain parameters are passed.

    I am struggling to come up with a good way of testing this, though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on, but I also need to demonstrate that it has real-world utility. Looking at the self-tuning database literature, it is common to use TPC benchmarks, but I've had a look at the TPC-C site and it looks very time-consuming to implement and not that good a fit for my project's testing needs in any event (I would still be able to "rig" it through the decisions I made on indexing or physical architecture).

    Plan A would be to find willing beta testers, but in case that isn't possible I need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real-world applications, e.g.:

        http://msftdpprodsamples.codeplex.com/
        http://www.asp.net/community/projects/

    Does anyone have any better suggestions?
