Search Results

Search found 41565 results on 1663 pages for 'sql xml'.

Page 510/1663

  • How can I fake SQL data while preserving statements without commenting out my server-side code?

    - by Fedor
    I have to use hardcoded values for certain fields because at the moment we don't have access to the real data. When we do get access, I don't want to go through a lot of work uncommenting. Is it possible to keep this statement the way it is, except use '25' as the value for ratecode? IF(special.ratecode IS NULL, br.ratecode, special.ratecode) AS ratecode, I have about 8 or so IF statements similar to this and I'm just too lazy (even with Vim) to rewrite each one while commenting out the original line by line. I would have to do this: $sql = 'SELECT u.*,'; // IF ( special.ratecode IS NULL, br.ratecode, special.ratecode) AS ratecode $sql.= '25 AS ratecode';
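    One way to avoid commenting out each line is to leave the real expression in place and switch it off with a session flag, so restoring the real data later is a one-line change. A rough sketch in MySQL-flavoured SQL (which the IF() syntax suggests); the @use_fake_data variable and the table names in the FROM clause are placeholders, not the real schema:

        -- toggle: 1 = hardcoded values, 0 = real expressions
        SET @use_fake_data := 1;
        SELECT u.*,
               IF(@use_fake_data = 1,
                  25,
                  IF(special.ratecode IS NULL, br.ratecode, special.ratecode)) AS ratecode
               -- ... the other 7 or so IF() columns can follow the same pattern ...
        FROM users u                                  -- placeholder tables; keep the real FROM/JOINs
        LEFT JOIN rates br ON br.user_id = u.id
        LEFT JOIN specials special ON special.user_id = u.id;

    When the real data becomes available, flipping @use_fake_data to 0 (or deleting the outer IF) restores the original behaviour without touching the PHP string-building.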

    Read the article

  • Is there an alternative to using a VBScript for enabling FILESTREAM after SQL Server installation?

    - by user193655
    To use FILESTREAM on a DB, 3 steps must be done: 1) enable it at server/instance level, 2) enable it (sp_configure) at DB level, 3) create a varbinary(max) field that supports FILESTREAM. (2) and (3) are done easily with T-SQL; (1) is doable manually from SQL Server Configuration Manager, and basically what I need is to check all 3 checkboxes: but how is it possible to automate that? I found the article "Enabling FILESTREAM using a VBScript"; is there another way to do it than using VBScripts? Maybe something that is only possible with 2008 R2? In case VBScript is the only solution, what are the possible downsides?
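    For the parts SQL Server itself controls, no VBScript is needed: steps (2) and (3) can be scripted in T-SQL. A minimal sketch, assuming the Windows-level setting from step (1) has already been made in Configuration Manager (or via WMI), and with purely illustrative table and column names; the table also assumes the database already has a FILESTREAM filegroup:

        -- step (2): instance-wide access level (0 = off, 1 = T-SQL only, 2 = T-SQL + Win32 streaming)
        EXEC sp_configure 'filestream access level', 2;
        RECONFIGURE;
        GO
        -- step (3): a FILESTREAM column needs a ROWGUIDCOL uniqueidentifier with a UNIQUE constraint
        CREATE TABLE dbo.Documents (
            DocId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
            DocData VARBINARY(MAX) FILESTREAM NULL
        );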

    Read the article

  • Can a SQL Server 2008 database support both REST and SOAP web services on two different endpoints?

    - by PaulDecember
    Say you have a SQL Server 2008 database. You build a SOAP web service. You then deploy or publish this using Visual Studio 2010 in one website. Now, using the same database, you build a REST web service in a different solution. You deploy this on another website. Can you consume the endpoints and/or .svc files of both the SOAP and REST web services, even though they reference the same SQL Server 2008 database? I don't see why not, but before I go down this path and spend days on it I'd like to make sure. Also, is there a performance hit to the database if it is serving both SOAP and REST at the same time? Again, I don't see why it would matter, but I must make sure. Thanks.

    Read the article

  • How to log the raw SQL from the Oracle OCCI C++ API?

    - by savanna
    One of our customers is complaining that our application is not working. Their reasoning is that our SQL function call to their Oracle database is not getting the "expected" result. Sometimes it should fail, but our application gets a success from their database. It's really frustrating because it's their database and we cannot do any tests on it. We are using the C++ Oracle OCCI API. Is there any way we can log the raw SQL from our end? That would be very helpful: we could ship the script to them and let them debug it on their system to figure out the problem. Thanks in advance.
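    If capturing the text on the client side proves difficult, one option worth offering the customer is Oracle's own SQL tracing, which writes every statement a session executes (optionally with bind values) to a server-side trace file that can be read with tkprof. A sketch in SQL*Plus-style syntax; the SID and serial# values are placeholders:

        -- trace the current session only:
        ALTER SESSION SET sql_trace = TRUE;
        -- or have a DBA trace the application's session by SID / serial#, including bind values:
        EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => FALSE, binds => TRUE);
        -- turn tracing off again when done:
        EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);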

    Read the article

  • How to merge two databases into one in MS SQL Server 2008?

    - by SzamDev
    Hi, I have 2 PCs, each of them has MS SQL Server 2008 installed and a database with data in it. I need a way to move the data in my DB from this MS SQL Server to another one (another PC which has the same DB), i.e. move data from one PC to the other. There is one problem, the ID column: because the DB on each of my 2 PCs has data in it, this column counts from 1, 2, 3, ... (the data will conflict with the other data in my DB). Is there any way to solve my problem and move the data successfully?
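    Assuming the only conflict is in the IDENTITY values, the usual approaches are either to let the target database assign fresh IDs, or to shift the incoming IDs past the current maximum. A rough sketch; the linked server name [PC2], the database SourceDb, the table dbo.Customers and its columns are all placeholders for the real schema, and any foreign keys pointing at the old IDs would need the same remapping:

        -- option 1: let the target generate new identity values
        INSERT INTO dbo.Customers (Name, City)
        SELECT Name, City
        FROM [PC2].SourceDb.dbo.Customers;

        -- option 2: keep the old values but offset them past the current MAX(Id)
        DECLARE @offset INT = ISNULL((SELECT MAX(Id) FROM dbo.Customers), 0);
        SET IDENTITY_INSERT dbo.Customers ON;
        INSERT INTO dbo.Customers (Id, Name, City)
        SELECT Id + @offset, Name, City
        FROM [PC2].SourceDb.dbo.Customers;
        SET IDENTITY_INSERT dbo.Customers OFF;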

    Read the article

  • How to make UPDATE queries in LINQ to SQL?

    - by Alex
    I like using LINQ to SQL. The only problem is that I don't like the default way of updating tables. Let's say I have the following table with the following columns: ID (primary key), value1, value2, value3, value4, value5. When I need to update something I want to call UPDATE ... WHERE ID=@id, but LINQ to SQL calls UPDATE ... WHERE ID=@id and value1=@value1 and value2=@value2 and value3=@value3 and value4=@value4 and value5=@value5. I can override this behavior by adding UpdateCheck=UpdateCheck.Never to every column, but with every update of the DataContext class with the GUI this gets erased. Is there any way to tell LINQ to use my way of updating data?

    Read the article

  • How to get the second and third batch from the same query result in Oracle SQL + the Yii framework?

    - by sasori
    Let's say I have 20 results in the SQL query. If I use the limit in the Yii active record, I'll obviously get the first four from the result, but what if I want to get the 2nd four and then the 3rd four from the same query result? How do I query that via SQL? e.g. $criteria2 = new CDbCriteria(); $criteria2->select = 'USERID, ADID ,ADTYPE, ADTITLE, ADDESC, PAGEVIEW, DISPPUBLISHDATE'; $criteria2->addCondition("STATUS = 1"); $criteria2->order = '"t".PAGEVIEW DESC,"t".PUBLISHDATE DESC'; $criteria2->limit = 4; $criteria2->with = array('subcat','adimages'); $result = $this->findAll($criteria2); return $result;
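    On the Yii side, CDbCriteria also has an offset property that pairs with limit ($criteria2->offset = 4; gives the second batch of four, 8 the third). On the raw SQL side, Oracle versions before 12c have no LIMIT/OFFSET, so the usual pattern is ROW_NUMBER(); a sketch below, where the table name ADS is a placeholder for whatever table the model maps to:

        SELECT *
        FROM (
            SELECT t.USERID, t.ADID, t.ADTYPE, t.ADTITLE, t.ADDESC, t.PAGEVIEW, t.DISPPUBLISHDATE,
                   ROW_NUMBER() OVER (ORDER BY t.PAGEVIEW DESC, t.PUBLISHDATE DESC) AS rn
            FROM ADS t
            WHERE t.STATUS = 1
        )
        WHERE rn BETWEEN 5 AND 8;   -- rows 5-8 = second batch of four; 9-12 gives the third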

    Read the article

  • Microsoft releases SQL Server 2014 CTP 2 and touts its new In-Memory capabilities that can speed up performance 30-fold

    Microsoft releases SQL Server 2014 CTP 2 and touts its new In-Memory capabilities that can speed up performance 30-fold. Microsoft took advantage of its PASS Summit 2013 event, dedicated to SQL Server, to unveil CTP 2 of its modern data management platform, SQL Server 2014. SQL Server 2014 is designed around three major goals: offering an "In-Memory" database engine, new cloud capabilities to simplify the adoption of cloud computing for SQL databases...

    Read the article

  • 2 Servers setup for redundancy, backup

    - by minal
    I presently have 1 dedicated virtual server running my website/blog/mail, etc. This is on Hyper-V with 512MB RAM, Windows Web 2008. Within the VM I have these running: SmarterMail – for emails; MS DNS – I have my own nameservers on this server; SQL Express; IIS7; 2 IP addresses. I have now leased 2 physical servers: P4 2.6GHz, 1GB RAM, 80GB HDD. With these new servers I get 2 IPs per server as well. These are running Windows 2008 Standard. With the VM the HDD was obviously on a RAID setup, so I was not worried about hardware issues as it fell on the provider to manage. However, with the new servers the HDD is not RAID’d, hence my concern is that if it fails I need a backup position. What would be the most ideal setup to go for? I am thinking: Server 1 (Web/Primary DNS): DNS – NS1; SQL Express – OFF, turn on when required, i.e. Server 2 is down; SmarterMail – OFF, turn on when required, i.e. Server 2 is down; IIS 7. Server 2 (SQL/Backup): DNS – NS2; SQL Web Edition; SmarterMail; IIS 7. How can I set it up so that if 1 goes down I can have everything on 2 instantly, or by manually switching over? I am confused, as other DNS servers will cache the web server's IP address for requests, and if that server goes down, the backup server will have a different IP. How do I make this work? I will be doing routine backups, in which case I will keep copies of backups on both servers. If I am copying the same stuff on both servers like a mirror, then I lose the ability to get the true performance out of them; it's like 1 server is always on standby. Ideally I want SQL and web on 2 different machines for best performance. If Server 1 goes down, I should be able to switch to Server 2 fairly easily. I don't have a problem with manual intervention to start the SQL/mail services, etc. In terms of scalability, the VM has coped pretty well to date. Moving forward, the SQL and IIS workload is going to double pretty quickly. Some ideas would be great.
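    For the SQL part specifically, one low-tech way to keep Server 2 ready to take over (without mirroring or clustering) is to ship backups across and keep the copy restoring WITH NORECOVERY, then recover it at failover time. A sketch; the database name and UNC paths are placeholders, the database must be in the FULL recovery model for log backups, and the edition hosting the standby copy must be able to restore it:

        -- on Server1, on a schedule (full once a day, log backups more often):
        BACKUP DATABASE MyAppDb TO DISK = N'\\server2\backups\MyAppDb_full.bak' WITH INIT;
        BACKUP LOG MyAppDb TO DISK = N'\\server2\backups\MyAppDb_log.trn' WITH INIT;

        -- on Server2, restore and stay in the restoring state so later log backups can be applied:
        RESTORE DATABASE MyAppDb FROM DISK = N'\\server2\backups\MyAppDb_full.bak' WITH NORECOVERY, REPLACE;
        RESTORE LOG MyAppDb FROM DISK = N'\\server2\backups\MyAppDb_log.trn' WITH NORECOVERY;

        -- at failover time on Server2, bring the copy online:
        RESTORE DATABASE MyAppDb WITH RECOVERY;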

    Read the article

  • SSMS Tools Pack 2.1.0 is out. Added support for SQL Server 2012 RC0.

    - by Mladen Prajdic
    This version adds support for SQL Server 2012 RC0 and fixes a few bugs with SQL History. Because of the support for regions in SSMS 2012 the regions and debug sections feature has been removed from SSMS Tools Pack for SQL Server 2012. The feature is still available for previous SSMS versions. In other news SSMS Tools Pack has won the SQL Magazine bronze award for best free tool of 2011. You can view all the details at the SQL Server Magazine Award page. Thanx to all the people who voted for it. I'm glad you all like it and use it with great success. Also I've added a possibility for you to subscribe to email notifications in case the auto-updater doesn't work for you for some reason like being behind a proxy. Enjoy it!

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day... the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 SQLS2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory' option is currently set to 6GB. I've put the results of the trace in a table and queried them using this query. SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE textdata is not null and EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC As I expected, some values are very high. Top Reads, in pages: 2504188 1965910 1445636 1252433 1239108 1210153 1088580 1072725 Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20'000 MB, 20GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. OR, maybe it might just slow down the process of chewing up the whole RAM. If a lot fewer pages are read, maybe it'll all run much better even when usage is high? (less time swapping, etc.) Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation... Before I start digging out all of the bad T-SQL and putting indices here and there, is there something else I can do? Any advice beyond what I know (not much yet...) would be much appreciated. Leo.
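    The page arithmetic checks out: 2,504,188 pages × 8 KB ≈ 19.1 GB, so "roughly 20 GB" is right. As a complement to the Profiler trace, the plan cache DMVs can surface the same heavy readers without running a trace at all; a sketch using the standard sys.dm_exec_query_stats pattern:

        SELECT TOP (20)
               qs.total_logical_reads,
               qs.execution_count,
               qs.total_logical_reads * 8 / 1024 AS total_read_mb,          -- pages are 8 KB each
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                               WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_logical_reads DESC;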

    Read the article

  • NLOG to output db.out

    - by Coppermill
    I would like to use NLog to output the SQL generated by LINQ to SQL to the log file. E.g. db.Log = Console.Out reports the generated SQL to the console (http://www.bryanavery.co.uk/post/2009/03/06/Viewing-the-SQL-that-is-generated-from-LINQ-to-SQL.aspx). How can I get this log to go to NLog?

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to set up Bugzilla to receive bug reports from another system using the XML-RPC interface. Bugzilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this I installed the Test-Taint package from the default Perl repository; this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I ran checksetup.pl, which informs me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN is the same. I assume XML-RPC is reliant on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out. How can I get the XML-RPC / Test-Taint function working? Current Setup: IIS7.5 on Windows Server 2008, Bugzilla 4.2.2, Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558.
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me I won't go." Is it too much to ask of the quite expensive 3rd party backup tools to be way faster than the SQL Server native backup? Or at least to save a respectable amount of storage by producing really smaller backup files? By saying “really smaller”, I mean at least getting a file half the size. After Googling the internet in an attempt to understand what other “SQL people” are using for database backups, I see that most people are using one of three tools which are the main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered: is it possible that many are accustomed to using the above tools since SQL 2000 and 2005? This can easily be understood due to the fact that a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes, resulting in a 30GB file and leaving you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore - features that are still missing from SQL Server. But in case you, like me, don’t need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days) you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture. Medium size database: take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools. The 3rd party tools' maximum compression level takes twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price that you pay is very high CPU load and a much longer time. Only SQL Safe by Idera was quite fast with its maximum compression level, but most of the run time used 95% CPU on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks and, obviously, on the faster storage I got my backup ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I’m definitely not going to do so. Very large database: as the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped.
    SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU, in the case above 8 files for a 2 Quad CPU server. The impact of additional files is minimal. However, SQLsafe doesn’t show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU are the variables that should be taken into consideration. As for us, the backup speed is more critical than the storage and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks. Benchmark server: SQL Server 2008 R2 SP1, 2 Quad CPU. Database location: NetApp FC 15K Aggregate, 53 discs. Backup statements: no matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command line execution is also an option; I haven’t noticed any impact on the backup performance).
    SQL backup: backup database <DBNAME> to disk= '\\<networkpath>\par1.bak' , disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression
    LiteSpeed: EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1
    SQL Backup (Red Gate): EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'
    SQL safe: EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'
    If you still insist on using 3rd party tools for the backups in your production environment with maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more: RedGate: use the THREADPRIORITY option (values 0 – 6); LiteSpeed: use @throttle (percentage, like 70%); SQL safe: the only thing I have found was the @Threads option. Yours, Maria

    Read the article

  • Store XML data in Core Data

    - by ct2k7
    Hi, is there any easy way to store XML data in Core Data? Currently, my app just pulls the values from the XML file directly; however, this isn't efficient for XML files which hold over 100 entries, thus storing the data in Core Data would be the best option. The XML file is called/downloaded/parsed every time the app opens. With Core Data, the XML data would be downloaded every 3600 seconds or so and would refresh the current data in Core Data, to reduce the loading time when opening the app. Any ideas on how I can do this? Having reviewed the developer documentation, it doesn't look very tasty.

    Read the article

  • different location of AndroidManifest.xml

    - by didito
    I have a custom build chain for an Android project. There is a build.xml and a build.properties. build.properties contains this line: manifest.file =${env.WORK_FOLDER}/AndroidManifest.xml and WORK_FOLDER is correctly set to proj_root_work. The layout is like the following: proj_root-_work [DIR] (some stuff gets preprocessed and all finally copied here - src, res, assets and the AndroidManifest.xml) -build.xml -build.properties Then I call ant for my build configuration from the proj_root and it complains about not finding AndroidManifest.xml. When I put it in my proj_root it works, but I really want it in my _Work directory. I read that the file HAS TO BE in the root, so what is the point then of the build.properties entry? Also I saw some custom parameters to aapt.exe ... any hints welcome, thx

    Read the article

  • Problem processing XML in Flex 3

    - by john
    Hi All, First time here asking a question and still learning on how to format things better... so sorry about the format as it does not look too well. I have started learning flex and picked up a book and tried to follow the examples in it. However, I got stuck with a problem. I have a jsp page which returns xml which basically have a list of products. I am trying to parse this xml, in other words go through products, and create Objects for each product node and store them in an ArrayCollection. The problem I believe I am having is I am not using the right way of navigating through xml. The xml that is being returned from the server looks like this: <?xml version="1.0" encoding="ISO-8859-1"?><result type="success"> <products> <product> <id>6</id> <cat>electronics</cat> <name>Plasma Television</name> <desc>65 inch screen with 1080p</desc> <price>$3000.0</price> </product> <product> <id>7</id> <cat>electronics</cat> <name>Surround Sound Stereo</name> <desc>7.1 surround sound receiver with wireless speakers</desc> <price>$1000.0</price> </product> <product> <id>8</id> <cat>appliances</cat> <name>Refrigerator</name> <desc>Bottom drawer freezer with water and ice on the door</desc> <price>$1200.0</price> </product> <product> <id>9</id> <cat>appliances</cat> <name>Dishwasher</name> <desc>Large capacity with water saver setting</desc> <price>$500.0</price> </product> <product> <id>10</id> <cat>furniture</cat> <name>Leather Sectional</name> <desc>Plush leather with room for 6 people</desc> <price>$1500.0</price> </product> </products></result> And I have flex code that tries to iterate over products like following: private function productListHandler(e:JavaFlexStoreEvent):void { productData = new ArrayCollection(); trace(JavaServiceHandler(e.currentTarget).response); for each (var item:XML in JavaServiceHandler(e.currentTarget).response..product ) { productData.addItem( { id:item.id, item:item.name, price:item.price, description:item.desc }); } } with trace, I can see the xml being returned from the server. However, I cannot get inside the loop as if the xml was empty. In other words, JavaServiceHandler(e.currentTarget).response..product must be returning nothing. Can someone please help/point out what I could be doing wrong. 
My JavaServiceHandler class looks like this: package com.wiley.jfib.store.data { import com.wiley.jfib.store.events.JavaFlexStoreEvent; import flash.events.Event; import flash.events.EventDispatcher; import flash.net.URLLoader; import flash.net.URLRequest; public class JavaServiceHandler extends EventDispatcher { public var serviceURL:String = ""; public var response:XML; public function JavaServiceHandler() { } public function callServer():void { if(serviceURL == "") { throw new Error("serviceURL is a required parameter"); return; } var loader:URLLoader = new URLLoader(); loader.addEventListener(Event.COMPLETE, handleResponse); loader.load(new URLRequest(serviceURL)); // var httpService:HTTPService = new HTTPService(); // httpService.url = serviceURL; // httpService.resultFormat = "e4x"; // httpService.addEventListener(Event.COMPLETE, handleResponse); // httpService.send(); } private function handleResponse(e:Event):void { var loader:URLLoader = URLLoader(e.currentTarget); response = XML(loader.data); dispatchEvent(new JavaFlexStoreEvent(JavaFlexStoreEvent.DATA_LOADED) ); // var httpService:HTTPService = HTTPService(e.currentTarget); // response = httpService.lastResult.product; // dispatchEvent(new JavaFlexStoreEvent(JavaFlexStoreEvent.DATA_LOADED) ); } } } Even though I refer to this as mine and it is not in reality. This is from a Flex book as a code sample which does not work, go figure. Any help is appreciated. Thanks john

    Read the article

  • Out of the box approach to upload an XML file to the BIRT Server for processing

    - by Paul
    Hello, I have the BIRT Report Server configured in Tomcat and it works fine when running reports that require an XML datasource, but that XML file has to be available on the network in order for the server to find it and run. Is there an out-of-the-box configuration in the BIRT server that will prompt the user to upload the XML file directly to the server when they try to run a given report that requires an XML data source? This would be handy for users that have the XML datasource stored locally on their C drive, so they don't have to move it to a network server in order for it to be read by BIRT. Thanks in advance. Paul

    Read the article
