Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.

Page 1026 of 1167

  • What's wrong with debugging in Eclipse on Android?

    - by Sebastian Dwornik
    I've obviously been spoiled by Visual Studio, because although I'm just learning Android and the Eclipse environment, debugging apps in Eclipse is becoming a serious detriment to further development. For example, Eclipse will compile this divide-by-zero just fine:

        public class Lesson2Main extends Activity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                int i = 1 / 0;
                TextView tv = new TextView(this);
                tv.setText("Hello, Android!");
                setContentView(tv);
            }
        }

    And then, when it executes under the debugger, I get a full screen of useless debug info, none of which actually points me to the specific line containing the error. The stackTrace is null within the exception ('e') info tree, and it simply shows a message stating 'ArithmeticException'. (That's nice; how about you point me in the direction of where you found it!?) I've looked all over the screen and am baffled that this IDE can't get this right. Does developing with Eclipse send everyone back to 1991, with printf()-style logging at every interval to track down bugs? Seriously. Is there a configuration or plug-in that I'm missing to help with this? I haven't tested this case with Xcode, but if the iPhone dev IDE handles this more like Visual Studio, then no wonder the Android marketplace has so few apps. I'm excited about Android, but it seems that Eclipse is getting in the way.
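    One way to surface the missing detail, shown only as a sketch and assuming android.util.Log is imported (the tag string is arbitrary): catch the exception yourself and write it to LogCat, where the full stack trace, including file and line number, does appear even when the Variables view shows nothing useful.

        try {
            int i = 1 / 0;
        } catch (ArithmeticException e) {
            // Log.e prints the message and the full stack trace to the LogCat view
            Log.e("Lesson2Main", "divide by zero", e);
            throw e; // optionally re-throw after logging
        }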

    Read the article

  • Struts2 Sitemesh not working in WAS 6 server

    - by Jithin
    I have a Struts2-Spring application which works fine in the Jetty server, but when I try migrating it to WAS 6 the decorator (SiteMesh) is not getting applied. The server logs show no error. Is this a known issue? My web.xml looks like this:

        <filter>
            <filter-name>action2-cleanup</filter-name>
            <filter-class>org.apache.struts2.dispatcher.ActionContextCleanUp</filter-class>
        </filter>
        <filter>
            <filter-name>sitemesh</filter-name>
            <filter-class>com.opensymphony.module.sitemesh.filter.PageFilter</filter-class>
        </filter>
        <filter>
            <filter-name>action2</filter-name>
            <filter-class>org.apache.struts2.dispatcher.FilterDispatcher</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>action2</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter-mapping>
            <filter-name>action2-cleanup</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter-mapping>
            <filter-name>sitemesh</filter-name>
            <url-pattern>/*</url-pattern>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>INCLUDE</dispatcher>
            <dispatcher>REQUEST</dispatcher>
        </filter-mapping>
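    One thing that stands out, offered as a sketch rather than a confirmed WAS 6 fix: the filter-mapping order documented for Struts2 with SiteMesh is cleanup first, then SiteMesh, then the dispatcher, whereas the mappings above run the dispatcher first. Reordering the existing mappings is a cheap thing to try:

        <filter-mapping>
            <filter-name>action2-cleanup</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>
        <filter-mapping>
            <filter-name>sitemesh</filter-name>
            <url-pattern>/*</url-pattern>
            <dispatcher>REQUEST</dispatcher>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>INCLUDE</dispatcher>
        </filter-mapping>
        <filter-mapping>
            <filter-name>action2</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>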

    Read the article

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies in the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors. I use Eclipse for code development, however I usually build the projects from the command line, e.g. "mvn clean install". Unfortunately, when I do this, m2eclipse detects disk activity and attempts to rebuild the workspace. This interferes with the command-line build and sometimes results in a race condition. For example, the command line might be in its clean phase but fail because it tries to delete a file or directory which is locked during the workspace rebuild. Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process is 2-3x longer than it should be. It isn't an option to not use Eclipse (e.g. to use NetBeans), or to disable m2eclipse; it is a useful plugin except for this behaviour. So my question is: how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?

    Read the article

  • How to copy x and y coordinate from one Flex component to another

    - by Tam
    Hi, I would like to base one component's x and y coordinates on another's. I tried using the binding notation, but it doesn't seem to work:

        <?xml version="1.0" encoding="utf-8"?>
        <s:VGroup xmlns:fx="http://ns.adobe.com/mxml/2009"
                  xmlns:s="library://ns.adobe.com/flex/spark"
                  xmlns:mx="library://ns.adobe.com/flex/halo"
                  xmlns:vld="com.lal.validators.*"
                  xmlns:effect="com.lal.effects.*"
                  xmlns:components="com.lal.components.*"
                  width="400" height="100%" right="0"
                  horizontalAlign="right" verticalCenter="0">
            ......
            <s:VGroup width="125" height="100%" horizontalAlign="right" gap="0"
                      width.normal="153" x.normal="247"
                      width.expanded="199" x.expanded="201">
                ......
                <s:Panel includeIn="expanded" id="buttonsGroup"
                         mouseOut="changeStateToNormal();" mouseOver="stopChangeToNormal();"
                         skinClass="com.lal.skins.TitlelessPanel" title="hi"
                         right="0" width="125" height="700">
                    .....
                    <s:Label text="Jump To Date" paddingTop="20" />
                    <s:TextInput id="wholeDate" width="100"
                                 mouseOver="stopChangeToNormal();"
                                 click="date1.visible = true" focusOut="date1.visible = false"/>
                    ...
                </s:Panel>
            </s:VGroup>
            <mx:DateChooser id="date1"
                            change="useDate(event); this.visible = false;"
                            visible="false" mouseOver="stopChangeToNormal();"
                            y="{wholeDate.y}" x="{wholeDate.x}" />
        </s:VGroup>

    Read the article

  • broken SQL 2008 SP1 Web Edition (can not login with SSMS)

    - by gerryLowry
    Scenario: my installation of SQL Server 2008 Web Edition SP1 was working properly. Since I've recently joined Microsoft's Website Spark*, I removed SQL 2008 and installed it again using my Website Spark edition and license from the MSDN download site. Next, I updated SQL 2008 to SP1 (this is required because I'm running Windows Server 2008 R2 Web Edition). When I launch SSMS (SQL Server Management Studio), "User name" is "myhost\Administrator" and is greyed out so it can not be changed. When I installed my Website Spark version, I did not include "myhost\Administrator" when configuring the SQL 2008 service accounts; instead I created an administrator account "myhost\mySQLaccount".

    ERROR MESSAGE:

        Connect to Server (X)
        Cannot connect to (local)
        Additional information:
        Login failed for user 'myhost\Administrator' (Microsoft SQL Server, Error: 18456)

    I tried to use the SQL Server Configuration Manager to correct this problem but could not find any useful way to fix this issue. How do I fix this problem?

        Connect to Server ...
        Server type: Database Engine
        Server name: (local)
        Authentication: Windows Authentication

    Please advise. Thank you. Gerry

    * http://www.microsoft.com/web/websitespark/default.aspx
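    One possible way out, sketched under the assumption that "myhost\mySQLaccount" was granted sysadmin during setup: run SSMS or sqlcmd under that account (e.g. runas /user:myhost\mySQLaccount ssms.exe) and then grant a login to the Windows account you are actually logged in as. The account names below are taken from the question.

        -- executed while connected as myhost\mySQLaccount (set up as a SQL administrator)
        CREATE LOGIN [myhost\Administrator] FROM WINDOWS;
        EXEC sp_addsrvrolemember N'myhost\Administrator', N'sysadmin';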

    Read the article

  • Using libcurl to create a valid POST

    - by Haraldo
        static int get( const char * cURL, const char * cParam )
        {
            CURL *handle;
            CURLcode result;
            std::string buffer;
            char errorBuffer[CURL_ERROR_SIZE];

            //struct curl_slist *headers = NULL;
            //headers = curl_slist_append(headers, "Content-Type: Multipart/Related");
            //headers = curl_slist_append(headers, "type: text/xml");

            // Create our curl handle
            handle = curl_easy_init();
            if( handle )
            {
                curl_easy_setopt(handle, CURLOPT_ERRORBUFFER, errorBuffer);
                //curl_easy_setopt(handle, CURLOPT_HEADER, 0);
                //curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
                curl_easy_setopt(handle, CURLOPT_POST, 1);
                curl_easy_setopt(handle, CURLOPT_POSTFIELDS, cParam);
                curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, strlen(cParam));
                curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1);
                curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, Request::writer);
                curl_easy_setopt(handle, CURLOPT_WRITEDATA, &buffer);
                curl_easy_setopt(handle, CURLOPT_USERAGENT, "libcurl-agent/1.0");
                curl_easy_setopt(handle, CURLOPT_URL, cURL);

                result = curl_easy_perform(handle);
                curl_easy_cleanup(handle);
            }
            if( result == CURLE_OK )
            {
                return atoi( buffer.c_str() );
            }
            return 0;
        }

    Hi there. First of all, I'm having trouble debugging this in Visual Studio Express 2008, so I'm unsure what buffer.c_str() might actually be returning, but I am outputting 1 or 0 to the web page being posted to. Therefore I'm expecting the buffer to be one or the other; however, I seem to only be getting back 0 or equivalent. Does the code above look like it will return what I expect, or should my variable types be different? The conversion using "atoi" may be an issue. Any thoughts would be much appreciated.

    Read the article

  • Open PDF Content files in ASP.NET MVC 2

    - by mcbingo
    I want to provide simple href links to the PDF forms that reside in my Forms folder. I have created a simple Index.aspx and a FormController Index action that simply iterates through the list of PDF files using my FormMetaData.xml file. The links get created just fine, but when you click on them I get a 404 exception that looks like this:

        Server Error in '/' Application.
        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
        Requested URL: /Forms/ccindteamgolfform.pdf
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

    It seems like this should open a new browser window with the PDF in it, but perhaps I am making a bad assumption. The PDF content files have a Build Action of Content and Copy to Output set to Copy Always. Here is an example of the output HTML for the link from my Index.aspx page:

        <span class="form">
            <a href="Forms/ccindteamgolfform.pdf" target="_blank">
                <span class="description">Entry Form</span></span>

    I must be missing something because this does not work. Do I need to add a MapRoute for these documents? Or am I missing something else with the routing? This seems like it should not be that difficult.
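    A sketch of one common workaround, assuming the request is being swallowed by MVC routing rather than served as a static file; the route patterns below are illustrative only:

        // Global.asax.cs - tell routing to leave the static PDF folder alone
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("Forms/{*pathInfo}");   // let /Forms/*.pdf be served as static files

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }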

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate.

    Design #1. (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation: a data provider implementation could access data in a database, an XML file, a service, or something else, and the user of a data provider does not need to know about it. (2) Create a database library with Hibernate. This library implements the data provider interface in (1).

    The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface (let's call them DPI-Objects), the other used in the database library exclusively for entity/attribute mapping in Hibernate (let's call them H-Objects). In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert H-Objects into DPI-Objects.

    Design #2. Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib, so the user of the database library needs to be aware of Hibernate.

    I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!

    Read the article

  • Ant: How to force java compilation if classpath jars have changed but sources have not

    - by Ittai
    Hi, I have the following use case: I have a Java project (myProj) which uses a common.jar from a different project (common). I want the javac Ant task to recompile myProj even if its sources have not changed, whenever common.jar has changed (as the sources of myProj depend on it and might be invalid now). I have a task which copies common.jar from a central location into the myProj lib if it has changed, and I can use it to set a property indicating whether or not to "force" compilation, so that end is taken care of. I'm not sure how (or if) I can tell the javac task to try and compile anyway. I don't want to touch myProj's sources (or timestamps) just so that the task will start. Excerpt from the Ant build.xml file:

        <path id="project.class.path">
            <pathelement location... />
            ...
            <fileset dir="lib" includes="**/*.jar" />
        </path>

        <target name="copyLibs">
            <copy file="${central.loc}/common.jar" todir="lib" />
            ...
        </target>

        <target name="javac">
            <javac srcdir="src" includes="**" excludes=... >
                <classpath refid="project.class.path"/>
            </javac>
        </target>

    Thanks in advance, Ittai
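    One way to force the recompile without touching the sources, sketched with made-up property names and an assumed build/classes output directory: compare common.jar against a marker file with <uptodate> and wipe the compiled classes when the jar is newer, so javac's own up-to-date check finds nothing to skip.

        <target name="checkCommon">
            <uptodate property="common.unchanged"
                      srcfile="lib/common.jar"
                      targetfile="build/classes/.compiled-marker"/>
        </target>

        <target name="cleanIfCommonChanged" depends="checkCommon" unless="common.unchanged">
            <delete dir="build/classes"/>
        </target>

        <target name="javac" depends="copyLibs,cleanIfCommonChanged">
            <mkdir dir="build/classes"/>
            <javac srcdir="src" destdir="build/classes" includes="**">
                <classpath refid="project.class.path"/>
            </javac>
            <!-- refresh the marker so the next run can tell whether common.jar changed again -->
            <touch file="build/classes/.compiled-marker"/>
        </target>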

    Read the article

  • Re-tweets and replies with JTwitter

    - by Samuh
    I have not used Twitter enough to become familiar with its terminology or the way it works, so please help me understand the problem I have at hand. I am getting the last 20 status updates posted by some Twitter user via an RSS feed; the feed XML is parsed and the statuses are displayed in a ListView. This means I have the original tweet in a String variable (a row of the ListView). When I click a ListView item, I get the options "Re-tweet" and "Post reply". As I understand it, when re-tweeting I will just have to update my status as:

        RT @orig-poster <original tweet>

    and when posting a reply I will just have to update my status as:

        @orig-poster <my tweet>

    I skimmed through the Javadocs of the JTwitter library (the Twitter class) and found a setStatus(String) method. I don't think I will have to make use of the retweet() or reply() functions of the Twitter class in the JTwitter library. Is my understanding correct? Please correct me if I am wrong here or missing anything. Thanks!
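    A minimal sketch of the string-building approach described above, using only the setStatus(String) method mentioned in the question (how the Twitter object is constructed and authenticated is left out, and the names are placeholders). One caveat: a plain "@user ..." status is not threaded as a reply unless the reply-to status id is also supplied, which is what a dedicated reply call would normally do for you.

        // 'twitter' is an already-authenticated JTwitter Twitter instance
        String origPoster = "orig_poster";          // example screen name
        String origTweet  = "the original tweet";   // text taken from the ListView row

        // manual re-tweet: prefix with "RT @user "
        twitter.setStatus("RT @" + origPoster + " " + origTweet);

        // manual reply: start the status with @user
        twitter.setStatus("@" + origPoster + " my reply text");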

    Read the article

  • How to parse rss from a php page, using jQuery/jFeed?

    - by ricebowl
    I'm trying to fumble my way through parsing RSS sensibly, using jQuery and jFeed. Because of the same-origin policy I'm pulling the BBC's health news feed into a local page (http://www.davidrhysthomas.co.uk/play/proxy.php). Originally this was just the same proxy.php script as available in the jFeed download package, but because my host has disabled allow_url_fopen() I've amended the PHP to the following:

        $url = "http://newsrss.bbc.co.uk/rss/newsonline_uk_edition/health/rss.xml";

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $data = curl_exec($ch);
        echo "$data";
        curl_close($ch);

    This seems to generate the same/comparable contents as the original fopen on my local machine. Now that that seems to be working, I'm looking at setting the jFeed script up to work with the page and, to my embarrassment, don't see how. I understand that, at the least, this should work:

        jQuery.getFeed({
            url: 'http://www.davidrhysthomas.co.uk/play/proxy.php',
            success: function(feed) {
                alert(feed.title);
            }
        });

    ...but, as I'm sure you anticipate, it doesn't. What non-output there is, is available for your perusal here: http://www.davidrhysthomas.co.uk/play/exampleTest.html. And I honestly don't have a clue what to do about it. If anyone could offer some pointers, tips, hints or, at a pinch, a quick slap around the cheeks and a 'pull yourself together!', it'd be much appreciated... Thanks in advance =)

    Read the article

  • How to restrict access to a class's data based on state?

    - by Marcus Swope
    In an ETL application I am working on, we have three basic processes:

    1. Validate and parse an XML file of customer information from a third party
    2. Match values received in the file to values in our system
    3. Load customer data into our system

    The issue here is that we may need to display the customer information from any or all of the above states to an internal user, and there is data in our customer class that will never be populated before the values have been matched in our system (step 2). For this reason, I would like the values to not even be accessible when the customer is in this state, and I would like to avoid repeating logic like this everywhere:

        if (customer.IsMatched)
            DisplayTextOnWeb(customer.SomeMatchedValue);

    My first thought was to add a couple of interfaces on top of Customer that would only expose the properties and behaviors of the current state, and then only deal with those interfaces. The problem with this approach is that there seems to be no good way to move from an ICustomerWithNoMatchedValues to an ICustomerWithMatchedValues without doing direct casts, etc. (or at least I can't find one). I can't be the first to have come across this; how do you normally approach it? As a last caveat, I would like this solution to play nicely with Fluent NHibernate :) Thanks in advance...
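    A rough sketch of the interface-per-state idea, with the transition made explicit so the cast lives in exactly one place. All names here are invented for illustration, and whether this maps cleanly in Fluent NHibernate is a separate question.

        public interface IMatchingService
        {
            string Resolve(string rawValue);
        }

        public interface IUnmatchedCustomer
        {
            string RawXmlValue { get; }
            // matching is the only way to obtain the matched view of the customer
            IMatchedCustomer Match(IMatchingService matcher);
        }

        public interface IMatchedCustomer
        {
            string SomeMatchedValue { get; }
        }

        // one concrete class implements both; consumers only ever receive
        // the interface appropriate to the state the customer is in
        public class Customer : IUnmatchedCustomer, IMatchedCustomer
        {
            public string RawXmlValue { get; set; }
            public string SomeMatchedValue { get; private set; }

            public IMatchedCustomer Match(IMatchingService matcher)
            {
                SomeMatchedValue = matcher.Resolve(RawXmlValue);
                return this;
            }
        }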

    Read the article

  • C#: Easy access to the member of a singleton ICollection<> ?

    - by Rosarch
    I have an ICollection that I know will only ever have one member. Currently, I loop through it, knowing the loop will only ever run once, to grab the value. Is there a cleaner way to do this? I could alter the persistentState object to return single values, but that would complicate the rest of the interface. It's grabbing data from XML, and for the most part ICollections are appropriate.

        // worldMapLinks ensured to be a singleton
        ICollection<IDictionary<string, string>> worldMapLinks = persistentState.GetAllOfType("worldMapLink");

        string levelName = ""; //worldMapLinks.GetEnumerator().Current['filePath'];

        // this loop will only run once
        foreach (IDictionary<string, string> dict in worldMapLinks) // hacky hack hack hack
        {
            levelName = dict["filePath"];
        }

        // proceed with levelName
        loadLevel(levelName);

    Here is another example of the same issue:

        // meta will be a singleton
        ICollection<IDictionary<string, string>> meta = persistentState.GetAllOfType("meta");

        foreach (IDictionary<string, string> dict in meta) // this loop should only run once. HACKS.
        {
            currentLevelName = dict["name"];
            currentLevelCaption = dict["teaserCaption"];
        }

    Yet another example:

        private Vector2 startPositionOfKV(ICollection<IDictionary<string, string>> dicts)
        {
            Vector2 result = new Vector2();

            foreach (IDictionary<string, string> dict in dicts) // this loop will only ever run once
            {
                result.X = Single.Parse(dict["x"]);
                result.Y = Single.Parse(dict["y"]);
            }

            return result;
        }
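    A sketch of the usual LINQ answer, assuming .NET 3.5 or later and a using System.Linq directive: Single() returns the lone element and throws if there are zero or more than one, while First() only requires at least one.

        using System.Linq;

        // Single() asserts the collection holds exactly one element and returns it
        IDictionary<string, string> link = worldMapLinks.Single();
        string levelName = link["filePath"];
        loadLevel(levelName);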

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products ?

    - by pointlesspolitics
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communications Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc. As Microsoft talked up the enhancements in each product over the previous version, I felt that clients are not much interested in all these details. For example Office Communicator: surely they have improved the product a lot, and at first sight everyone said 'WOW, great product', but nobody wishes to pay money for all these extra features. Some argued they are bogged down by the increased number of menus; they don't need a soft-call feature bundled with mobile calling. The same applies to all the other products, such as MS Office (what next, two ribbons?), the Windows OS and many more. Indeed there must be good features in all these products, but is it worth spending money and time to upgrade the older systems? Sometimes these features will decrease productivity instead of increasing it. So do you think that whatever enhancement MS makes to its products is only for selling purposes, not real use, and also to keep developers busy learning the new tools and features? I am sure some people here will argue that certain people need this sort of feature, but I am not talking about NASA or MI5 guys; I am talking about usual businesses and Joe Public. Any ideas welcome.

    Read the article

  • Problem with Spring security's logout

    - by uther-lightbringer
    Hello, I've got a problem logging out in the Spring framework. First, when I want j_spring_security_logout to handle it for me, I get a 404: j_spring_security_logout not found. sample-security.xml:

        <http>
            <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" />
            <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" />
            <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" />
            <form-login login-page="/login.jsp"
                        default-target-url="/messageList.htm"
                        authentication-failure-url="/login.jsp?error=true" />
            <logout/>
        </http>

    Sample logout link in a JSP page:

        <a href="<c:url value="/j_spring_security_logout" />">Logout</a>

    When I try to use a custom JSP page instead, i.e. I use the login form for this purpose, I get a better result: at least it reaches the login page. But then another problem is that you don't actually get logged out; you can directly type a URL that should be guarded and you get past it anyway. Slightly modified from the previous listing:

        <http>
            <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" />
            <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" />
            <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" />
            <form-login login-page="/login.jsp"
                        default-target-url="/messageList.htm"
                        authentication-failure-url="/login.jsp?error=true" />
            <logout logout-success-url="/login.jsp" />
        </http>

        <a href="<c:url value="/login.jsp" />">Logout</a>

    Thank you for help
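    One thing worth checking, offered as a sketch rather than a confirmed diagnosis: a 404 on /j_spring_security_logout usually means the request never reached Spring Security's filter chain. Make sure web.xml maps the DelegatingFilterProxy over the whole application, and keep the logout link pointing at the filter URL rather than a JSP:

        <filter>
            <filter-name>springSecurityFilterChain</filter-name>
            <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>springSecurityFilterChain</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>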

    Read the article

  • SQL query in JSP file pulling variable from VXML file

    - by s1066
    Hi, I'm trying to get an SQL query to work within a JSP file. The JSP file is pulled in by a VXML file. Here is my JSP code:

        <?xml version="1.0"?>
        <%@ page import="java.util.*" %>
        <%@ page import="java.sql.*" %>
        <%
            boolean success = true; // Always optimistic
            String info = "";
            String schoolname = request.getParameter("schoolname");
            String informationtype = request.getParameter("informationtype");
            try {
                Class.forName("org.postgresql.Driver");
                String connectString = "jdbc:postgresql://localhost:5435/N0176359";
                String user = "****";
                String password = "*****";
                Connection conn = DriverManager.getConnection(connectString, user, password);
                Statement st = conn.createStatement();
                ResultSet rsvp = st.executeQuery("SELECT * FROM lincolnshire_school_information_new WHERE school_name=\'"+schoolname+"\'");
                rsvp.next();
                info = rsvp.getString(2);
            } catch (ClassNotFoundException e) {
                success = false; // something went wrong
            }
        %>

    As you can see, I'm trying to insert the value of the variable declared as "schoolname" into the end of the SQL query. However, when I run the JSP file it doesn't work and I get a "ResultSet not positioned properly" error. When I put a standard query in (without trying to use the value of the variable) it works fine. Hope that makes sense, and thank you for any help!
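    A sketch of the usual hardening for this pattern, reusing the table and column positions from the code above: check the return value of next(), which is false when the parameter didn't match any row, and bind the parameter with a PreparedStatement instead of string concatenation.

        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM lincolnshire_school_information_new WHERE school_name = ?");
        ps.setString(1, schoolname);
        ResultSet rsvp = ps.executeQuery();

        if (rsvp.next()) {
            info = rsvp.getString(2);
        } else {
            success = false; // no row matched: schoolname was null, empty or spelled differently
        }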

    Read the article

  • navigateToURL with GET parameters in local SWF

    - by Michael Brewer-Davis
    I'm running a Flex application locally (local-with-filesystem or local-trusted), and I'm trying to call navigateToURL to a local page using GET parameters. Flash Player seems to be ignoring the parameters when opening the local page, though. I've been scouring the Flash security pages to find a documented prohibition for this, but haven't found anything. Pointers? How would you work around this issue? My Flex app:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Script>
                <![CDATA[
                    private function onClick(event:MouseEvent):void {
                        var request:URLRequest = new URLRequest("target.html");
                        request.data = new URLVariables();
                        request.data.text = "stackoverflow.com";
                        navigateToURL(request);
                    }
                ]]>
            </mx:Script>
            <mx:Button label="Go" click="onClick(event)" />
        </mx:Application>

    And my target.html:

        <html>
            <head>
                <script language="JavaScript">
                <!--
                    function showURL() {
                        alert(window.location.href);
                    }
                //-->
                </script>
            </head>
            <body onload="showURL()" />
        </html>

    Read the article

  • slow SQL command

    - by Retrocoder
    I need to take some data from one table (expanding some XML on the way) and put it in another table. As the source table can have thousands of records, which caused a timeout, I decided to do it in batches of 100 records. The code is run on a schedule, so doing it in batches works OK for the customer. If I have, say, 200 records in the source database the sproc runs very fast, but if there are thousands it takes several minutes. I'm guessing that the "TOP 100" only takes the top 100 after it has gone through all the records. I need to change the whole code and sproc at some point as it doesn't scale, but for now is there a quick fix to make this run quicker?

        INSERT INTO [deviceManager].[TransactionLogStores]
        SELECT TOP 100
            [EventId],
            [message].value('(/interface/mac)[1]', 'nvarchar(100)') AS mac,
            [message].value('(/interface/device) [1]', 'nvarchar(100)') AS device_type,
            [message].value('(/interface/id) [1]', 'nvarchar(100)') AS device_id,
            [message].value('substring(string((/interface/id)[1]), 1, 6)', 'nvarchar(100)') AS store_id,
            [message].value('(/interface/terminal/unit)[1]', 'nvarchar(100)') AS unit,
            [message].value('(/interface/terminal/trans/event)[1]', 'nvarchar(100)') AS event_id,
            [message].value('(/interface/terminal/trans/data)[1]', 'nvarchar(100)') AS event_data,
            [message].value('substring(string((/interface/terminal/trans/data)[1]), 9, 11)', 'nvarchar(100)') AS badge,
            [message].value('(/interface/terminal/trans/time)[1]', 'nvarchar(100)') AS terminal_time,
            MessageRecievedAt_UTC AS db_time
        FROM [deviceManager].[TransactionLog]
        WHERE EventId > @EventId
        --WHERE MessageRecievedAt_UTC > @StartTime AND MessageRecievedAt_UTC < @EndTime
        ORDER BY terminal_time DESC
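    One low-risk thing to try, sketched under the assumption that EventId is indexed (ideally the clustered key): ORDER BY terminal_time forces every qualifying row to be shredded from XML and sorted before TOP 100 can be applied, because terminal_time is computed. Ordering by the indexed column instead lets the TOP stop early.

        -- same column list as above, but ordered by the indexed key rather than the
        -- computed XML value, so SQL Server can stop after 100 rows
        SELECT TOP 100
            [EventId],
            ...
        FROM [deviceManager].[TransactionLog]
        WHERE EventId > @EventId
        ORDER BY EventId;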

    Read the article

  • Book recommendation for Silverlight

    - by Mathias Weyel
    Hi there, yet another question asking for recommendations for a book on Silverlight. I'm looking for a book that covers the UI and styling and, if possible, custom drawing and graphics. Very important for me is the style of the book: it should focus on the actual programming and not on where to click in Visual Studio to get things done. Let's take a fictional example of proper coverage of the DataGrid control.

    Bad: To use the data grid, drag it from the toolbox onto the control. You can change the background color by clicking on "Background" in the properties. To define custom columns, click on columns and edit them in the configuration window that opens.

    Good: To use the DataGrid, you need a reference to the blah dll and declare the namespace in the XAML like this (blah); the data model should be like blah, and if you want to define what the columns look like, you need to define them like this (more blah). And if you want to do this in C# because you for whatever reason aren't able or willing to use XAML, this would look like blah.

    Bonus points for coverage of topics like how to manage resources (images/fonts) and internationalization. There are quite a few snippets on how to do that on the internet, but somehow each of them looks like it works without being a proper way of doing it. cheers Mathias

    Read the article

  • Connecting Error to Remote Oracle XE database using ASP.NET

    - by imsatasia
    Hello, I have installed Oracle XE on my development machine and it is working fine. Then I installed the Oracle XE client on my test machine, which is also working fine, and I can access the development PC's database from the browser. Now I want to create an ASP.NET application which can access that Oracle XE database. I tried it too, but it always shows me an error on my TEST machine when connecting to the database on the development machine using ASP.NET. Here is my code for the ASP.NET application:

        protected void Page_Load(object sender, EventArgs e)
        {
            string connectionString = GetConnectionString();
            OracleConnection connection = new OracleConnection(connectionString);
            connection.Open();

            Label1.Text = "State: " + connection.State;
            Label1.Text = "ConnectionString: " + connection.ConnectionString;

            OracleCommand command = connection.CreateCommand();
            string sql = "SELECT * FROM Users";
            command.CommandText = sql;

            OracleDataReader reader = command.ExecuteReader();
            while (reader.Read())
            {
                string myField = (string)reader["nID"];
                Console.WriteLine(myField);
            }
        }

        static private string GetConnectionString()
        {
            // To avoid storing the connection string in your code,
            // you can retrieve it from a configuration file.
            return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" +
                   "(ADDRESS=(PROTOCOL=TCP)(HOST=myServerAddress)(PORT=1521))" +
                   "(CONNECT_DATA=(SERVICE_NAME=)));";
        }
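    One detail worth checking in the code above, offered as a sketch with the host name as a placeholder: the descriptor leaves SERVICE_NAME empty and still points at the literal myServerAddress. For a default Oracle XE install the service name is XE, and the host should be the development machine.

        // hypothetical connection string for a default Oracle XE install;
        // replace devmachine with the real host name or IP of the development PC
        return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" +
               "(ADDRESS=(PROTOCOL=TCP)(HOST=devmachine)(PORT=1521))" +
               "(CONNECT_DATA=(SERVICE_NAME=XE)));";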

    Read the article

  • Serial.begin(speed, config) not compiling for Leonardo Board

    - by forgemo
    I would like to configure my serial communication to have no parity, one start bit and two stop bits. The documentation for Serial.begin(speed, config) states: "(...) An optional second argument configures the data, parity, and stop bits. The default is 8 data bits, no parity, one stop bit." The documentation also lists the possible configuration values. According to my (limited) understanding, I need SERIAL_7N2 or SERIAL_8N2 to meet my requirements. (I'm not sure how the data bits relate to the one start bit that I need.) However, I can't even compile, because I have no idea how to supply that config value to the begin method. (I don't have much Arduino/C++ experience.) I've tried the following two variants in my code:

        Serial.begin(9600, SERIAL_8N2);
        Serial.begin(9600, "SERIAL_8N2");

    Am I missing something?

    Additional information: Serial.begin(speed, config) was introduced with the latest Arduino 1.0.2 IDE version. The code defining/implementing the begin methods can be found here: HardwareSerial.h, HardwareSerial.cpp

    Edit: According to the replies from PeterJ and borges, the following variant is correct:

        Serial.begin(9600, SERIAL_8N2);

    However, it's still not working. I found that the compile error doesn't occur if I change the configured board from my Arduino Leonardo to Arduino Uno. Therefore, it could be a bug occurring only with a subset of boards... or maybe it's not supported?!
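    A sketch of a workaround worth trying, stated as an assumption rather than a confirmed fix: on the Leonardo, Serial is the USB CDC port, whose begin() does not take the config argument, while the hardware UART on pins 0/1 is Serial1 and does. If the device is wired to the UART pins, something like this should compile:

        void setup() {
            // Leonardo: Serial is USB-CDC, Serial1 is the hardware UART on pins 0/1
            Serial1.begin(9600, SERIAL_8N2);   // 8 data bits, no parity, 2 stop bits
        }

        void loop() {
        }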

    Read the article

  • Building a custom (dynamic) dataset and grid

    - by marko.ivanovski.nz
    Hi, I'm in the process of building a dynamic table in which you can add/remove rows and columns, so it varies in size depending on what the user wants. Its purpose is to store properties for a product, but there can be from 1 to 10 different properties (columns) per product, and multiple instances (rows) of the product as well. Here's a screenshot of what I mean: http://i40.tinypic.com/nbqkxc.jpg. As you can see, I need the structure to be completely up to the client, which is where I'm getting stuck. I've started writing a custom DataSet that has "add column" and "add row" buttons, and have built a custom Table with TextBoxes in each cell which builds from that DataSet. I have no idea how to store the data on submit, though, and to make it even more complex I need to store this in the database as a string, which I think I can do by converting it to XML. Any help is appreciated; I think I just need a pointer in the right direction and am happy to do research from there. Thanks in advance. Marko
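    For the "store it as an XML string" part, a minimal sketch using the built-in DataTable XML support (the method and parameter names are placeholders): WriteXml with XmlWriteMode.WriteSchema keeps the user-defined columns, so the table can be rebuilt later with ReadXml.

        using System.Data;
        using System.IO;

        // serialize the dynamic table (schema + rows) to a single string for the database
        string SaveTable(DataTable productProperties)
        {
            using (var writer = new StringWriter())
            {
                productProperties.WriteXml(writer, XmlWriteMode.WriteSchema);
                return writer.ToString();
            }
        }

        // rebuild the table, columns and all, from the stored string
        DataTable LoadTable(string xml)
        {
            var table = new DataTable();
            using (var reader = new StringReader(xml))
            {
                table.ReadXml(reader);
            }
            return table;
        }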

    Read the article

  • Can I split system.serviceModel into a separate .config file?

    - by Mr Bell
    I want to separate the system.serviceModel section of my web.config into a separate file, to facilitate some environment-specific settings. My efforts have been fruitless. When I attempt it using this method, the WCF code throws an exception: "The type initializer for 'System.ServiceModel.ClientBase`1' threw an exception." Can anyone tell me what I am doing wrong?

    Web.config:

        <configuration>
            <system.serviceModel configSource="MyWCF.config" />
            ....

    MyWCF.config:

        <system.serviceModel>
            <extensions> ... </extensions>
            <bindings> ... </bindings>
            <behaviors> ... </behaviors>
            <client> ... </client>
        </system.serviceModel>
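    A sketch of the usual explanation, worth verifying against your framework version: system.serviceModel is a section group, not a section, and configSource is only valid on individual sections. So it has to be applied one level down, with each section in its own file whose root element is that section:

        <!-- web.config -->
        <system.serviceModel>
            <bindings  configSource="bindings.config" />
            <behaviors configSource="behaviors.config" />
            <client    configSource="client.config" />
        </system.serviceModel>

        <!-- bindings.config: the <bindings> element is the root of its own file -->
        <bindings>
            ...
        </bindings>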

    Read the article

  • Can I return JSON from an .asmx Web Service if the ContentType is not JSON?

    - by Click Ahead
    I would like to post a form using Ajax and jQuery to an .asmx web service and return the value from the web service as JSON. I'm using ASP.NET 4.0. I know that in order to get JSON back from a web service, the following need to be set: (1) dataType: "json"; (2) contentType: "application/json; charset=utf-8"; (3) type: "POST"; (4) the data set to something. I have tested this and it works fine (i.e. my web service returns the data as JSON) if all four are set. But let's say that in my case I want to do a standard form post, i.e. test1=value1&test2=value2, so the content type is not JSON, but I still want back JSON {test1:value1}. This doesn't seem to work, because the contentType is "application/x-www-form-urlencoded" rather than "application/json; charset=utf-8". Can anyone tell me why I can't do this? It seems crazy to me that you have to explicitly send JSON to get JSON back, yet if you don't use JSON (i.e. post a urlencoded content type) then the web service will return XML. Any insights are greatly appreciated :)

    Read the article

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Due to the typical size of the data set, the processing time is not insignificant and warrants feedback for the user. NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, attempting to use an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset.

    Profile results on an original iPhone (2G):

    Automatic inferred lightweight migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6130 (+0.6130) models loaded
        PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
        PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
        PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
        PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

    Manual inferred migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6660 (+0.6660) models loaded
        PROFILE: 1.1471 (+0.4811) inferred mapping model generated
        PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
        PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
        PROFILE: 22.6952 (+21.1894) manual migration completed
        PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, the manual migration takes over 5 times longer than the automatic one! It's a big inconsistency, and the lightweight option that NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: provides gives absolutely no indication of progress while processing. Can anybody provide a supported way to get the migrationProgress values during automatic migration, OR a way to configure an inferred mapping model to be as fast during manual processing as it is automatically?

    Read the article
