Search Results

Search found 32970 results on 1319 pages for 'zend db select'.

Page 429/1319

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and through the questions asked here about MySQL caching, and most of them are too vague to answer a few specific questions I have about performance and MySQL query caching. Assume for all of the questions below that I have the query cache enabled and that it is of type 2, or "DEMAND":
    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?
    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, will a SELECT query against T2 still check the cache before executing? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)
    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?
    For answers to 2 and 3, is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?
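    Not part of the original question, but a minimal way to watch DEMAND-mode behavior is to compare the server-wide Qcache_* status counters around a hinted and an unhinted SELECT. The sketch below assumes the MySQLdb driver and placeholder connection details, and borrows the T1/T2 names from the question:

        # Minimal sketch: observe DEMAND-mode caching via the global status
        # counters. Connection details are placeholders, not from the question.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="test", passwd="test", db="test")
        cur = conn.cursor()

        def qcache_counters(cursor):
            # Qcache_hits / Qcache_inserts are global (per-server) counters.
            cursor.execute("SHOW GLOBAL STATUS LIKE 'Qcache%'")
            return dict(cursor.fetchall())

        before = qcache_counters(cur)
        # With query_cache_type = 2 (DEMAND), only statements carrying the
        # SQL_CACHE hint are considered for caching.
        cur.execute("SELECT SQL_CACHE * FROM T1 WHERE id = 1")
        cur.fetchall()
        cur.execute("SELECT * FROM T2 WHERE id = 1")  # no hint: not cached in DEMAND mode
        cur.fetchall()
        after = qcache_counters(cur)

        for key in ("Qcache_hits", "Qcache_inserts"):
            print key, before.get(key), "->", after.get(key)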

    Read the article

  • Python: convert buffer type of SQLITE column into string

    - by Volatil3
    I am new to Python 2.6. I have been trying to fetch a datetime value, stored in yyyy-mm-dd hh:mm:ss format, back into my Python program. When I read the column in Python I get the error: 'buffer' object has no attribute 'decode'. I want to use the strptime() function to split up the date data and use it, but I can't find out how to convert a buffer to a string. The following is a sample of my code:

        conn = sqlite3.connect("mrp.db.db", detect_types=sqlite3.PARSE_DECLTYPES)
        cursor = conn.cursor()
        qryT = """
            SELECT dateDefinitionTest FROM t
            WHERE IDproject = 4 AND IDstatus = 5
            ORDER BY priority, setDate DESC
            """
        rec = (4,4)
        cursor.execute(qryT, rec)
        resultsetTasks = cursor.fetchall()
        cursor.close()  # closing the resultset
        for item in resultsetTasks:
            taskDetails = {}
            _f = item[10].decode("utf-8")

    The exception I get is: 'buffer' object has no attribute 'decode'
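    One way past the error, sketched below on the assumption that the column really holds text such as "2012-12-25 13:45:00" (a made-up value), is to convert the buffer to a plain byte string with str() before decoding or parsing:

        # Minimal sketch (not from the question): a sqlite3 'buffer' value can
        # be turned into a plain byte string with str(), after which
        # datetime.strptime() works as usual.
        from datetime import datetime

        raw = buffer("2012-12-25 13:45:00")        # stand-in for item[10]
        text = str(raw)                            # buffer -> str
        # text = str(raw).decode("utf-8")          # only if you need unicode
        parsed = datetime.strptime(text, "%Y-%m-%d %H:%M:%S")
        print parsed.year, parsed.month, parsed.day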

    Read the article

  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a where clause similar to

        SELECT *
        FROM [v_MyView]
        WHERE [Name] LIKE '%Doe, John%'

    the query is very slow, but if I do the following...

        SELECT *
        FROM [v_MyView]
        WHERE [ID] IN
        (
            SELECT [ID]
            FROM [v_MyView]
            WHERE [Name] LIKE '%Doe, John%'
        )

    it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, where the second query returns in less than 5 seconds. Any suggestions on how I can improve this? If I run the whole thing as one SQL statement (without the use of a view), it is very fast as well. I believe this happens because of how a view has to behave like a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the where clause is applied before or after the view is executed. My question is: why wouldn't SQL optimize my first query into something as efficient as my second query?

    Read the article

  • Can you update/add records in SQL using a datagridview and LINQ to SQL

    - by Jordan S
    Is it possible to bind a DataGridView to a LINQ to SQL class so that when I make changes to the records in the DataGridView it automatically updates the SQL database? I have tried binding the data like this, but if I make changes to the data in the DataGridView they do not actually affect the data in the database...

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;

    Read the article

  • How to limit traffic using multicast over localhost

    - by Shane Holloway
    I'm using multicast UDP over localhost to implement a loose collection of cooperative programs running on a single machine. The following code works well on Mac OS X, Windows and Linux. The flaw is that it also receives UDP packets sent from outside the localhost network. For example, sendSock.sendto(pkt, ('192.168.0.25', 1600)) is received by my test machine when sent from another box on my network.

        import platform, time, socket, select

        addr = ("239.255.2.9", 1600)

        sendSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 24)
        sendSock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))

        recvSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
        if hasattr(socket, 'SO_REUSEPORT'):
            recvSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, True)
        recvSock.bind(("0.0.0.0", addr[1]))
        status = recvSock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, socket.inet_aton(addr[0]) + socket.inet_aton("127.0.0.1"))

        while 1:
            pkt = "Hello host: {1} time: {0}".format(time.ctime(), platform.node())
            print "SEND to: {0} data: {1}".format(addr, pkt)
            r = sendSock.sendto(pkt, addr)

            while select.select([recvSock], [], [], 0)[0]:
                data, fromAddr = recvSock.recvfrom(1024)
                print "RECV from: {0} data: {1}".format(fromAddr, data)

            time.sleep(2)

    I've attempted to recvSock.bind(("127.0.0.1", addr[1])), but that prevents the socket from receiving any multicast traffic. Is there a proper way to configure recvSock to only accept multicast packets from the 127/24 network, or do I need to test the address of each received packet?
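    One pragmatic option, sketched below as a drop-in replacement for the inner receive loop of the script above (it assumes the same recvSock and select objects and is only one possible fix): keep the socket bound to 0.0.0.0 but check the source address of each datagram and ignore anything that did not originate from loopback.

        # Minimal sketch: filter in user space, dropping any datagram whose
        # source address is not loopback (127.0.0.0/8).
        def is_loopback(ip):
            return ip.startswith("127.")

        while select.select([recvSock], [], [], 0)[0]:
            data, fromAddr = recvSock.recvfrom(1024)
            if not is_loopback(fromAddr[0]):
                continue  # ignore traffic that arrived from outside localhost
            print "RECV from: {0} data: {1}".format(fromAddr, data)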

    Read the article

  • Rails timezone differences between Time and DateTime

    - by kjs3
    I have the timezone set:

        config.time_zone = 'Mountain Time (US & Canada)'

    Creating a Christmas event from the console...

        c = Event.new(:title => "Christmas")

    Using Time:

        c.start = Time.new(2012,12,25)
        # => 2012-12-25 00:00:00 -0700   (has correct offset)
        c.end = Time.new(2012,12,25).end_of_day
        # => 2012-12-25 23:59:59 -0700   (same deal)

    Using DateTime:

        c.start = DateTime.new(2012,12,25)
        # => Tue, 25 Dec 2012 00:00:00 +0000   (no offset)
        c.end = DateTime.new(2012,12,25).end_of_day
        # => Tue, 25 Dec 2012 23:59:59 +0000   (same)

    I've carelessly been using DateTime, thinking the input was assumed to be in config.time_zone, but there's no conversion when this gets saved to the db. It's stored just the same as the return values (formatted for the db). Using Time is really no big deal, but do I need to manually offset any time I'm using DateTime and want it to be in the correct zone?

    Read the article

  • Linked Server related

    - by rmdussa
    I have two instances of SQL Server: Server1 (SQL Server 2008) Server2 (SQL Server 2005) I am executing a stored procedure from Server1 which references tables on Server2. It is working fine in my test environment: Server1 runs Vista SP2, SQL Server 2008; Server2 runs Windows XP SP2, SQL Server 2005. However, it is not working in the production environment: Server1 runs Vista SP1, SQL Server 2008; Server2 runs Windows XP SP2, SQL Server 2005. The error message I receive is: OLE DB provider "SQLNCLI10" for linked server "Server2" returned message "No transaction is active.". Msg 7391, Level 16, State 2, Line 21 The operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "Server2" was unable to begin a distributed transaction.

    Read the article

  • Force Oracle error on fetch

    - by Dan
    I am trying to debug a strange behavior in my application. In order to do so, I need to reproduce a scenario where a SQL SELECT query will throw an error, but only while actually fetching from the cursor, not while executing the query itself. Can this be done? Any error will do, but ORA-01722: invalid number seems like the obvious one to try. I created a table with the following:

        KEYCOL   INTEGER PRIMARY KEY
        OTHERCOL VARCHAR2(100)

    I then created a few hundred rows with unique values for the primary key and the value 1 for OTHERCOL. I then ran a SELECT * query, picked a row somewhere in the middle, and updated its OTHERCOL to the string 'abcd'. I then ran the query

        SELECT KEYCOL, TO_NUMBER(OTHERCOL) FROM SOMETABLE

    hoping to get some rows of good data and then an error later. But I keep getting ORA-01722: invalid number on the execute step itself. I have gotten this behavior programmatically using ADO (with a server-side cursor) and JDBC, as well as from PL/SQL Developer. How can I get the result I'm looking for? Thanks.
    Edit - meant to add: when using ADO, I am only calling Command.Execute. I am not creating or opening a Recordset.

    Read the article

  • Rspec-rails doesn't seem to find my models

    - by sa125
    Hi - I'm trying out RSpec, and immediately hit a wall when it doesn't seem to load db records I know exist. Here's my fairly simple spec (no tests yet):

        require File.expand_path(File.dirname(__FILE__) + '../spec_helper')

        describe SomeModel do
          before :each do
            @user1 = User.find(1)
            @user2 = User.find(2)
          end

          it "should do something fancy"
        end

    I get an ActiveRecord::RecordNotFound exception, saying it couldn't find User with ID=1 or ID=2, which I know for a fact exist. I set both test and development databases to point to the same schema in database.yml, so this shouldn't be a database mixup. I also ran script/generate rspec after installing the gems (rspec, rspec-rails), and added config.gem entries to both environment.rb and test.rb. Any idea what I'm missing? Thanks.
    EDIT: It seems I was running the tests with rake spec:models, which emptied the db, so no records were found. When I ran spec spec/models/some_model_spec.rb directly, everything worked as expected.

    Read the article

  • simple jquery fetch from mysql

    - by JPro
    I am trying to use jQuery with MySQL and I wrote something like this:

        <html>
        <head>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
        <script>
        function example_ajax_request() {
            $('#example-placeholder').html('<p>Loading results ... <img src="ajax-loader.gif" /></p>');
            $('#example-placeholder').load("loadres.php");
        }
        </script>
        </head>
        <body>
        <div id="query">
            <select name="show" id="box">
                <option value="0">Select A Test</option>
                <option value="All">--All--</option>
                <option value="M1">Model1</option>
            </select>
            <input type="button" onclick="example_ajax_request()" value="Click Me!" />
        </div>
        <div id="example-placeholder">
            <p>Placeholding text</p>
        </div>
        </body>
        </html>

    Basically I want to pass parameters to the loadres.php file, but I am unable to figure out the exact way to do it. Any help is appreciated. Thanks.

    Read the article

  • Can Sql Server 2005 Pivot table have nText passed into it?

    - by manemawanna
    Right, a bit of a simple question: can I input ntext into a pivot table? (SQL Server 2005)
    What I have is a table which records the answers to a questionnaire, consisting of the following elements, for example:

        UserID  QuestionNumber  Answer
        Mic     1               Yes
        Mic     2               No
        Mic     3               Yes
        Ste     1               Yes
        Ste     2               No
        Ste     3               Yes
        Bob     1               Yes
        Bob     2               No
        Bob     3               Yes

    with the answers being held in ntext. Anyway, what I'd like a pivot table to produce is:

        UserID  1    2   3
        Mic     Yes  No  Yes
        Ste     Yes  No  Yes
        Bob     Yes  No  Yes

    I have some test code that creates a pivot table, but at the moment it just shows the number of answers in each column (code can be found below). So I just want to know: is it possible to add ntext to a pivot table? When I've tried, it brings up errors, and someone stated on another site that it wasn't possible, so I would like to check whether this is the case or not. For further reference, I don't have the opportunity to change the database, as it's linked to other systems that I haven't created or have access to. Here is the SQL code I have at present:

        DECLARE @query NVARCHAR(4000)
        DECLARE @count INT
        DECLARE @concatcolumns NVARCHAR(4000)

        SET @count = 1
        SET @concatcolumns = ''

        WHILE (@count <= 52)
        BEGIN
            IF @count > 1 AND @count <= 52
                SET @concatcolumns = (@concatcolumns + ' + ')
            SET @concatcolumns = (@concatcolumns + 'CAST ([' + CAST(@count AS NVARCHAR) + '] AS NVARCHAR)')
            SET @count = (@count + 1)
        END

        DECLARE @columns NVARCHAR(4000)
        SET @count = 1
        SET @columns = ''

        WHILE (@count <= 52)
        BEGIN
            IF @count > 1 AND @count <= 52
                SET @columns = (@columns + ',')
            SET @columns = (@columns + '[' + CAST(@count AS NVARCHAR) + '] ')
            SET @count = (@count + 1)
        END

        SET @query = '
            SELECT UserID, ' + @concatcolumns + '
            FROM(
                SELECT UserID, QuestionNumber AS qNum
                FROM QuestionnaireAnswers
                WHERE QuestionnaireID = 7
            ) AS t
            PIVOT
            (
                COUNT (qNum)
                FOR qNum IN (' + @columns + ')
            ) AS PivotTable'

        SELECT @query
        EXEC(@query)

    Read the article

  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level:

        DECLARE @Profitability AS TABLE
        (
            Cust INT NOT NULL
            ,Category VARCHAR(10) NOT NULL
            ,Income DECIMAL(10, 2) NOT NULL
            ,Expense DECIMAL(10, 2) NOT NULL
        ) ;

        INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ;
        INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ;
        INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ;
        INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ;
        INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ;
        INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ;
        INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ;
        INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ;

        SELECT Cust
            ,Profit = SUM(Income - Expense)
            ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Cust

        SELECT Category
            ,Profit = SUM(Income - Expense)
            ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability
        GROUP BY Category

        SELECT Profit = SUM(Income - Expense)
            ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM @Profitability

    Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar, or table-valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?

    Read the article

  • Error connecting to SQL Server 2008 with Django

    - by qq263020776
    I am using django-mssql and SQL Server 2008, but I found that it always errors when I run some commands, for example:

        python manage.py syncdb

    The error is below (the console paste is partly truncated; the escaped Unicode is the standard Chinese DBNETLIB message "SQL Server does not exist or access denied"):

        raise OperationalError(e, "Error opening connection: " + connection_string)
        django.db.backends.sqlserver_ado.dbapi.OperationalError: (com_error(-2147352567, ...,
          (0, u'Microsoft OLE DB Provider for SQL Server',
           u'[DBNETLIB][ConnectionOpen (Connect()).]SQL Server ...', None, 0, -2147467259), None),
          'Error opening connection: PROVIDER=SQLOLEDB;DATA SOURCE=115.238.106.100,60433;Network Library=DBMSSOCN;Initial Catalog=rvdb_1;UID=sa;PWD=xxxx')

    When I use the Microsoft SQL Server Management Studio client, I can successfully connect to the database. I got some information from http://code.google.com/p/django-mssql/issues/detail?id=76, but when I tried it I still got the error, and I think the solution provided there is wrong.
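    Not part of the original question, but one way to rule out basic connectivity or credential problems independently of django-mssql is to try the same server from plain Python. The sketch below is a hypothetical diagnostic only: it assumes the pyodbc package is installed, reuses the host, port and database from the pasted connection string, and the ODBC driver name may need adjusting for your machine.

        # Hypothetical connectivity check outside Django (assumes pyodbc is
        # installed; driver name and credentials may need adjusting).
        import pyodbc

        conn_str = (
            "DRIVER={SQL Server};"
            "SERVER=115.238.106.100,60433;"   # host,port taken from the error message
            "DATABASE=rvdb_1;"
            "UID=sa;PWD=xxxx"
        )
        conn = pyodbc.connect(conn_str, timeout=5)
        cursor = conn.cursor()
        cursor.execute("SELECT @@VERSION")
        print cursor.fetchone()[0]
        conn.close()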

    Read the article

  • Full text index requires dropping and recreating - why?

    - by Amjid Qureshi
    Hi all. I've got a web app running on .NET 3.5 connected to a SQL 2005 box. We do scheduled releases every 2 weeks. About 14 tables out of 250 are full-text indexed. After not every release, but a few too many, the indexes crap out. They seem to have data in them, but when we try to search them from the front end or SQL enterprise we get timeouts/hangs. We have a script that disables the indexes, drops them, deletes the catalog and then recreates the indexes. This fixes the problem 99 times out of 100, and the one other time we run the script again and it all works. We have tried just rebuilding the full-text index, but that doesn't fix the issue. My question is: why do we have to do this, and what can we do to sort the index out? Here is a bit of the script:

        IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
            ALTER FULLTEXT INDEX ON [dbo].[Address] DISABLE
        GO

        IF EXISTS (SELECT * FROM sys.fulltext_indexes fti WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
            DROP FULLTEXT INDEX ON [dbo].[Address]
        GO

        IF EXISTS (SELECT * FROM sysfulltextcatalogs ftc WHERE ftc.name = N'DbName.FullTextCatalog')
            DROP FULLTEXT CATALOG [DbName.FullTextCatalog]
        GO

        -- may need this line if we get an error
        BACKUP LOG SMS2 WITH TRUNCATE_ONLY

        CREATE FULLTEXT CATALOG [DbName.FullTextCatalog]
            ON FILEGROUP [FullTextCatalogs]
            IN PATH N'F:\Data'
            AS DEFAULT
            AUTHORIZATION [dbo]

        CREATE FULLTEXT INDEX ON [Address](CommonPlace LANGUAGE 'ENGLISH')
            KEY INDEX PK_Address
            ON [DbName.FullTextCatalog]
            WITH CHANGE_TRACKING AUTO
        GO

    Read the article

  • Adding marker to the retrieved location

    - by Rahul Varma
    I have displayed the map in my app by using the following code. I have retrieved info from the database and displayed the map. Now I want to add a marker at the retrieved location...
    googleMap.java:

        public class googleMap extends MapActivity {
            private MapView mapView;
            private MapController mc;
            GeoPoint p;
            long s;
            Cursor cur;
            SQLiteDatabase db;
            createSqliteHelper csh;
            String qurry;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.map);
                // String qurry=getIntent().getStringExtra("value");
                // here is calling the map string qurry
                s = getIntent().getLongExtra("value", 2);
                map();
                mapView = (MapView) findViewById(R.id.mapview1);
                LinearLayout zoomLayout = (LinearLayout) findViewById(R.id.zoom);
                View zoomView = mapView.getZoomControls();
                zoomLayout.addView(zoomView, new LinearLayout.LayoutParams(
                        LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT));
                // mapView.displayZoomControls(true);
                mapView.setBuiltInZoomControls(true);
                mc = mapView.getController();
                String coordinates[] = {"1.352566007", "103.78921587"};
                double lat = Double.parseDouble(coordinates[0]);
                double lng = Double.parseDouble(coordinates[1]);
                Geocoder geoCoder = new Geocoder(this, Locale.getDefault());
                try {
                    List<Address> addresses = geoCoder.getFromLocationName(qurry, 5);
                    String add = "";
                    if (addresses.size() > 0) {
                        p = new GeoPoint(
                                (int) (addresses.get(0).getLatitude() * 1E6),
                                (int) (addresses.get(0).getLongitude() * 1E6));
                        mc.animateTo(p);
                        mapView.invalidate();
                        mc.setZoom(6);
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            @Override
            protected boolean isRouteDisplayed() {
                // Required by MapActivity
                return false;
            }

            public void map() {
                String[] str = {"type"};
                int[] i = {R.id.type};
                csh = new createSqliteHelper(this);
                db = csh.getReadableDatabase();
                cur = db.rawQuery("select type from restaurants where _id=" + s, null);
                if (cur.moveToFirst()) {
                    qurry = cur.getString(cur.getColumnIndex("type"));
                }
            }
        }

    Read the article

  • Forward slash problem in XSL and XSQL

    - by Peter Kaleta
    Hi. I have a simple XSQL page:

        <?xml version="1.0" encoding="utf-8"?>
        <?xml-stylesheet type="text/xsl" href="zad1.xsl" ?>
        <page xmlns:xsql="urn:oracle-xsql" connection="java:comp/env/jdbc/mondialDS">
            <xsql:query max-rows="-1" null-indicator="no" tag-case="lower" rowset-element="continents">
                select name as continent from mondial_user.Continent order by 1
            </xsql:query>
        </page>

    which gives me a list of continents with "australia/oceania" among them. I use this XSL on the above XSQL:

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <!-- Root template -->
            <res>
                <xsl:template match="/continents">
                    <xsl:for-each select="row">
                        <re>
                            <xsl:value-of select="continent"/>
                        </re>
                    </xsl:for-each>
                </xsl:template>
            </res>
        </xsl:stylesheet>

    and Firefox throws an error about a badly formed XML document, with:

        AfricaAmericaAsiaAustralia/OceaniaEurope
        -----------------------------------^

    Help appreciated.

    Read the article

  • Rake don't know how to build task?

    - by Schroedinger
    Using a rake task to import data into a database; the file is as follows:

        namespace :db do
          desc "load imported data from csv"
          task :load_csv_data => :environment do
            require 'fastercsv'
            require 'chronic'
            FasterCSV.foreach("input.csv", :headers => true) do |row|
              Trade.create(
                :name => row[0],
                :type => row[4],
                :price => row[6].to_f,
                :volume => row[7].to_i,
                :bidprice => row[10].to_f,
                :bidsize => row[11].to_i,
                :askprice => row[14].to_f,
                :asksize => row[15].to_i
              )
            end
          end
        end

    When attempting to use this, with the right CSV files and the other elements in place, it will say:

        Don't know how to build task 'db:import_csv_data'

    I know this structure works because I've tested it; I'm just trying to get it to convert to the new values on the fly. Suggestions?

    Read the article

  • Flex: FileReference and Image unhandled IOErrorEvent

    - by deux11
    The following code shows a button that allows you to select a file (which should be an image) and display it in an image component. When I select an invalid image (e.g. a Word document), I get the following error: "Error #2044: Unhandled IOErrorEvent:. text=Error #2124: Loaded file is an unknown type." I know I can pass a FileFilter to the FileReference:browse call, but that's beside the point. My question is... I want to handle the IOErrorEvent myself; what event listener am I missing?

        private var file:FileReference = new FileReference();

        private function onBrowse():void {
            file.browse(null);
            file.addEventListener(Event.SELECT, handleFileSelect);
            file.addEventListener(Event.COMPLETE, handleFileComplete);
        }

        private function handleFileSelect(event:Event):void {
            file.load();
        }

        private function handleFileComplete(event:Event):void {
            myImage.source = file.data;
        }

        private function handleImageIoError(evt:IOErrorEvent):void {
            Alert.show("IOErrorEvent");
        }

        <mx:Button click="onBrowse()" label="Browse"/>
        <mx:Image id="myImage" width="100" height="100" ioError="handleImageIoError(event)"/>

    Read the article

  • What is best practice with SQLite and Android ?

    - by PHP_Jedi
    What is considered "best practice" when executing queries on a SQLite db within an Android app? Is it safe to run inserts, deletes and select queries from an AsyncTask's doInBackground, or should I use the UI thread? I suppose that db queries can be "heavy" and should not use the UI thread, as they can lock up the app, resulting in an ANR. If I have several AsyncTasks, should they share a connection or should each open its own connection? Any best practices in this area on Android?

    Read the article

  • Python utf-8 decoding issue with hashlib.digest() method

    - by Sorw
    Hello StackOverflow community. Using Google App Engine, I wrote a keyToSha256() method within a model class (extending db.Model):

        class Car(db.Model):
            def keyToSha256(self):
                keyhash = hashlib.sha256(str(self.key())).digest()
                return keyhash

    When displaying the output (ultimately within a Django template), I get garbled text, for example:

        ?????_??!`?I?!?;?QeqN??Al?'2

    I was expecting something more in line with this:

        9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08

    Am I missing something important? Despite reading several guides on ASCII, Unicode, UTF-8 and the like, I think I'm still far from mastering the secrets of string encoding/decoding. After browsing StackOverflow and searching for insights via Google, I figured I should ask the question here. Any idea? Thanks!
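    For what it's worth, one likely explanation of the two outputs above is digest() (raw bytes, which render as garbage in a template) versus hexdigest() (a hex string like the expected value). A minimal standalone sketch, using a made-up key string rather than an App Engine key:

        # Minimal sketch (not tied to App Engine): .digest() returns 32 raw
        # bytes, while .hexdigest() returns the 64-character hex string.
        import hashlib

        raw = hashlib.sha256("some-entity-key").digest()       # raw bytes
        hexed = hashlib.sha256("some-entity-key").hexdigest()   # hex string
        print repr(raw)
        print hexed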

    Read the article

  • Problem with detecting the value of the drop down list on the server (servlet) side

    - by Harry Pham
    The client code is pretty simple:

        <form action="DDServlet" method="post">
            <input type="text" name="customerText">
            <select id="customer">
                <option name="customerOption" value="3"> Tom </option>
                <option name="customerOption" value="2"> Harry </option>
            </select>
            <input type="submit" value="send">
        </form>

    Here is the code on the servlet:

        Enumeration paramNames = request.getParameterNames();
        while (paramNames.hasMoreElements()) {
            String paramName = (String) paramNames.nextElement(); // get the next element
            System.out.println(paramName);
        }

    When I print the parameters out, I only see customerText, but not customerOption. Any idea why, guys? What I hope is that, if I select Tom in my option, once I submit, on my servlet I should be able to do this:

        String paramValues[] = request.getParameterValues(paramName);

    and get back a value of 3.

    Read the article

  • Need Help with SQL Subquery

    - by Pete Augello
    Hey: I am trying to write a query that will return all orders that only have a Subscription included. It is easy enough to write a query that includes all orders with Subscriptions, another that includes all orders without a Subscription, and then compare them with an unmatched query. But I don't want to have to store queries in my Access database; I prefer to have it all in my ASP code, and I can't get this to work with just one complex query. Here are samples of what works if I store them:

    Query1:
        SELECT tblOrders.OrderID, tblOrderItems.ProductID
        FROM tblOrders
        INNER JOIN tblOrderItems ON tblOrders.OrderID = tblOrderItems.OrderID
        WHERE ((Not ((tblOrderItems.ProductID)=12 And (tblOrderItems.ProductID)<=15)));

    Query2:
        SELECT tblOrders.OrderID, tblOrderItems.ProductID
        FROM tblOrders
        INNER JOIN tblOrderItems ON tblOrders.OrderID = tblOrderItems.OrderID
        WHERE ((((tblOrderItems.ProductID)=12 And (tblOrderItems.ProductID)<=15)));

    Query3:
        SELECT Query2.OrderID, Query2.ProductID
        FROM Query2
        LEFT JOIN Query1 ON Query2.OrderID = Query1.OrderID
        WHERE (((Query1.OrderID) Is Null));

    So, my question is: how do I write Query3 so that it doesn't refer to Query1 or Query2? Or am I missing some other way to do this? Thanks, Pete

    Read the article

  • Flex TileList control, image loading issue

    - by ckenan
    I have a Flex 3 TileList in which I load several images (employees' headshot pictures). The images I'm loading into the TileList are stored in a database (I use the ByteArray class and Base64 encoding to store the images in the DB). When I load the images into the TileList from the DB there is no problem, they are displayed correctly, but when I scroll down in the TileList and scroll up again, the position of the images changes, so for example the image in the first position can now be in the 3rd, and so on... Does somebody know how to fix that? Thanks in advance!
    PS: Here is the code of the ItemRenderer for the TileList:

        private function init():void {
            img.load(data.imageData);
        }

    Read the article

  • Can someone help me refactor this C# linq business logic for efficiency?

    - by Russell
    I feel like this is not a very efficient way of using LINQ. I was hoping somebody on here would have a suggestion for a refactor. I realize this code is not very pretty, as I was in a complete rush.

        public class Workflow
        {
            public void AssignForms()
            {
                using (var cntx = new ProjectBusiness.Providers.ProjectDataContext())
                {
                    var emplist = (from e in cntx.vw_EmployeeTaskLists
                                   where e.OwnerEmployeeID == null
                                   select e).ToList();
                    foreach (var emp in emplist)
                    {
                        // if employee has a form assigned: break;
                        if (emp.GRADE > 15 || (emp.Pay_Plan.ToLower().Contains("al") || emp.Pay_Plan.ToLower().Contains("ex")))
                        {
                            //Assign278();
                        }
                        else if ((emp.Series.Contains("0905") || emp.Series.Contains("0511") || emp.Series.Contains("0110") || emp.Series.Contains("1801"))
                                 || (emp.GRADE >= 12 && emp.GRADE <= 15))
                        {
                            var emptask = new ProjectBusiness.Providers.EmployeeTask();
                            emptask.TimespanID = cntx.Timespans.SingleOrDefault(t => t.BeginDate.Year == DateTime.Today.Year & t.EndDate.Year == DateTime.Today.Year).TimespanID;
                            var FormID = (from f in cntx.Forms
                                          where f.FormName.Contains("450")
                                          select f.FormID).FirstOrDefault();
                            var TaskStatusID = (from s in cntx.TaskStatus
                                                where s.StatusDescription.ToLower() == "not started"
                                                select s.TaskStatusID).FirstOrDefault();
                            Assign450((int)emp.EmployeeID, FormID, TaskStatusID, emptask);
                            cntx.EmployeeTasks.InsertOnSubmit(emptask);
                        }
                        else
                        {
                            //Assign185();
                        }
                    }
                    cntx.SubmitChanges();
                }
            }

            private void Assign450(int EmployeeID, int FormID, int TaskStatusID, ProjectBusiness.Providers.EmployeeTask emptask)
            {
                emptask.FormID = FormID;
                emptask.OwnerEmployeeID = EmployeeID;
                emptask.AssignedToEmployeeID = EmployeeID;
                emptask.TaskStatusID = TaskStatusID;
                emptask.DueDate = DateTime.Today;
            }
        }

    Read the article
