Search Results

Search found 16329 results on 654 pages for 'b long'.

  • wcf callback exception after updating to .net 4.0

    - by James
    I have a WCF service that uses callbacks over wsDualHttpBinding. The service pushes a DataTable of search results back to the client (for a long-running search) as it finds them. This worked fine in .NET 3.5. Since I updated to .NET 4.0, it bombs out with a System.Runtime.FatalException that actually kills the IIS worker process. I have no idea how to even go about starting to fix this. Any recommendations appreciated. The info from the resulting event log is pasted below.

        An unhandled exception occurred and the process was terminated.
        Application ID: /LM/W3SVC/2/ROOT/CP
        Process ID: 5284
        Exception: System.Runtime.FatalException
        Message: Object reference not set to an instance of an object.
        StackTrace:
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage31(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
           at System.ServiceModel.Dispatcher.ChannelHandler.DispatchAndReleasePump(RequestContext request, Boolean cleanThread, OperationContext currentOperationContext)
           at System.ServiceModel.Dispatcher.ChannelHandler.HandleRequest(RequestContext request, OperationContext currentOperationContext)
           at System.ServiceModel.Dispatcher.ChannelHandler.AsyncMessagePump(IAsyncResult result)
           at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
           at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
           at System.Runtime.InputQueue`1.AsyncQueueReader.Set(Item item)
           at System.Runtime.InputQueue`1.Dispatch()
           at System.ServiceModel.Channels.ReliableDuplexSessionChannel.ProcessDuplexMessage(WsrmMessageInfo info)
           at System.ServiceModel.Channels.ReliableDuplexSessionChannel.HandleReceiveComplete(IAsyncResult result)
           at System.ServiceModel.Channels.ReliableDuplexSessionChannel.OnReceiveCompletedStatic(IAsyncResult result)
           at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
           at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
           at System.ServiceModel.Channels.ReliableChannelBinder`1.InputAsyncResult`1.OnInputComplete(IAsyncResult result)
           at System.Runtime.Fx.AsyncThunk.UnhandledExceptionFrame(IAsyncResult result)
           at System.Runtime.AsyncResult.Complete(Boolean completedSynchronously)
           at System.Runtime.InputQueue`1.AsyncQueueReader.Set(Item item)
           at System.Runtime.InputQueue`1.Dispatch()
           at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
           at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
           at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
        InnerException: System.NullReferenceException
        Message: Object reference not set to an instance of an object.
        StackTrace:
           at System.Web.HttpApplication.ThreadContext.Enter(Boolean setImpersonationContext)
           at System.Web.HttpApplication.OnThreadEnterPrivate(Boolean setImpersonationContext)
           at System.Web.AspNetSynchronizationContext.CallCallbackPossiblyUnderLock(SendOrPostCallback callback, Object state)
           at System.Web.AspNetSynchronizationContext.CallCallback(SendOrPostCallback callback, Object state)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)

  • Slow MySQL Query Breaking my back!

    - by Chris n
    So, I have tried everything I can think of and can't get this query to run in less than 3 seconds on my local server. I know the problem has to do with the OR referencing both owner_id and person_id: if I filter on one or the other it returns instantly, but together with an OR I can't seem to make it work. I looked into rewriting the code, but the way the app was designed it won't be easy. Is there an equivalent to the OR that won't take so long? Here is the SQL:

        SELECT event_types.name AS event_type_name,
               event_types.id AS id,
               COUNT(events.id) AS count,
               SUM(events.estimated_duration) AS time_sum
        FROM events, event_types
        WHERE event_types.id = events.event_type_id
          AND events.event_type_id != '4'
          AND (events.status != 'cancelled')
          AND events.event_type_id != 64
          AND (events.owner_id = 161 OR events.person_id = 161)
        GROUP BY event_types.name
        ORDER BY event_types.name DESC;

    Here's the EXPLAIN output, although I'm guessing it's unnecessary because there is probably a better way to structure that OR:

        id | select_type | table       | type  | possible_keys                                                                                            | key                           | key_len | ref                                 | rows | Extra
         1 | SIMPLE      | event_types | range | PRIMARY                                                                                                  | PRIMARY                       | 4       | NULL                                | 78   | Using where; Using temporary; Using filesort
         1 | SIMPLE      | events      | ref   | index_events_on_status,index_events_on_event_type_id,index_events_on_person_id,index_events_on_owner_id | index_events_on_event_type_id | 5       | thenumber_production.event_types.id | 907  | Using where

    Thanks so much! chris.
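
    One common rewrite for an OR that spans two separately indexed columns is to split the query into two halves that can each use their own index and combine them with UNION (which also de-duplicates rows where both conditions match, so nothing is double-counted). A sketch against the tables above - untested against this schema, so treat it as a starting point rather than a drop-in replacement:

        SELECT event_types.name AS event_type_name,
               event_types.id AS id,
               COUNT(e.id) AS count,
               SUM(e.estimated_duration) AS time_sum
        FROM event_types
        JOIN (
            SELECT id, event_type_id, estimated_duration
            FROM events
            WHERE owner_id = 161 AND status != 'cancelled'
            UNION
            SELECT id, event_type_id, estimated_duration
            FROM events
            WHERE person_id = 161 AND status != 'cancelled'
        ) e ON event_types.id = e.event_type_id
        WHERE e.event_type_id NOT IN (4, 64)
        GROUP BY event_types.name
        ORDER BY event_types.name DESC;

    Each half can then resolve via index_events_on_owner_id or index_events_on_person_id. Depending on the MySQL version, the optimizer may also handle the original OR with an index merge, but the UNION form is the classic reliable workaround.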

  • httprequest: handle time delays until a response arrives

    - by bourax webmaster
    I have an application that calls a function to send a JSON object to a REST API. My problem is how to handle time delays and repeat this function until I have a response from the server, in case of an interrupted network connection. I tried to use a Handler, but I don't know how to stop it when I get a response! Here's my function, which is called when the button is clicked:

        protected void sendJson(final String name, final String email,
                                final String homepage, final Long unixTime, final String bundleId) {
            Thread t = new Thread() {
                public void run() {
                    Looper.prepare(); // prepare a message queue for the child thread
                    HttpClient client = new DefaultHttpClient();
                    HttpConnectionParams.setConnectionTimeout(client.getParams(), 10000); // timeout limit
                    HttpResponse response;
                    JSONObject json = new JSONObject();
                    // creating meta object
                    JSONObject metaJson = new JSONObject();
                    try {
                        HttpPost post = new HttpPost("http://util.trademob.com:5000/cards");
                        metaJson.put("time", unixTime);
                        metaJson.put("bundleId", bundleId);
                        json.put("name", name);
                        json.put("email", email);
                        json.put("homepage", homepage);
                        // add the meta in the root object
                        json.put("meta", metaJson);
                        StringEntity se = new StringEntity(json.toString());
                        se.setContentType(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
                        post.setEntity(se);
                        String authorizationString = "Basic " + Base64.encodeToString(
                                ("tester" + ":" + "tm-sdktest").getBytes(), Base64.NO_WRAP); // Base64.NO_WRAP flag
                        post.setHeader("Authorization", authorizationString);
                        response = client.execute(post);
                        String temp = EntityUtils.toString(response.getEntity());
                        Toast.makeText(getApplicationContext(), temp, Toast.LENGTH_LONG).show();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    Looper.loop(); // loop in the message queue
                }
            };
            t.start();
        }
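
    One way to stop the retries, as a sketch: drive the repeat from a Handler and cancel the pending callback once a response arrives. The names here (retryHandler, responseReceived, onServerResponse) are hypothetical, and the sketch assumes the request arguments are stored somewhere the Runnable can see them; onServerResponse() would be invoked from the request thread when client.execute(post) returns normally:

        private final Handler retryHandler = new Handler();
        private volatile boolean responseReceived = false;

        private final Runnable retryTask = new Runnable() {
            public void run() {
                if (responseReceived) return;            // a response arrived; stop retrying
                sendJson(name, email, homepage, unixTime, bundleId); // fire (or re-fire) the request
                retryHandler.postDelayed(this, 15000);   // schedule the next attempt in 15 s
            }
        };

        // called from the request thread once client.execute(post) returns normally
        private void onServerResponse() {
            responseReceived = true;
            retryHandler.removeCallbacks(retryTask);     // cancel any queued retry
        }

    The click listener would then call retryHandler.post(retryTask) instead of calling sendJson() directly.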

  • Code Golf: Easter Spiral

    - by friol
    What's more appropriate than a Spiral for Easter Code Golf sessions? Well, I guess almost anything. The Challenge: the shortest code by character count to display a nice ASCII Spiral made of asterisks ('*'). Input is a single number, R, that will be the x-size of the Spiral. The other dimension (y) is always R-2. The program can assume R to be always odd and >= 5. Some examples:

        Input: 7
        Output:
        *******
        *     *
        * *** *
        *   * *
        ***** *

        Input: 9
        Output:
        *********
        *       *
        * ***** *
        * *   * *
        * *** * *
        *     * *
        ******* *

        Input: 11
        Output:
        ***********
        *         *
        * ******* *
        * *     * *
        * * *** * *
        * *   * * *
        * ***** * *
        *       * *
        ********* *

    Code count includes input/output (i.e., full program). Any language is permitted. My easily beatable 303-character-long Python example:

        import sys;
        d=int(sys.argv[1]);
        a=[d*[' '] for i in range(d-2)];
        r=[0,-1,0,1];
        x=d-1;y=x-2;z=0;pz=d-2;v=2;
        while d>2:
         while v>0:
          while pz>0:
           a[y][x]='*';
           pz-=1;
           if pz>0:
            x+=r[z];
            y+=r[(z+1)%4];
          z=(z+1)%4;
          pz=d;
          v-=1;
         v=2;d-=2;pz=d;
        for w in a:
         print ''.join(w);

    Now, enter the Spiral...

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works

    - by Matt
    Hello, I am developing a Java app (with the ODBC bridge - forgive me - the only Paradox driver I have been able to obtain is the Microsoft ODBC driver), which works fine while in Eclipse (and NetBeans), connecting and obtaining data from an ancient Paradox 5.x database. So long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly

        [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    occurs. The jar is being run on the same box as my development IDE, so I am confused about the cause. It is being run via console from a user account, as per the IDE. My connection string is

        jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538; Fil=Paradox 5.X; DefaultDir=C:\paradox\database\location\

    - obtained from connectionstrings.com - and, as mentioned before, working fine when run from the IDE. The above seems to 'magically' create its own connection, avoiding the setup of a DSN - I am unsure quite how it does, but it works. The only other thing I can think might be pertinent is that my PC is a 64-bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt

  • Isolated storage misunderstanding

    - by Costa
    Hi, this is a discussion between me and me, to understand the isolated storage issue. Can you help me convince me about isolated storage? This is code written in a Windows Forms app (the reader) that reads the isolated storage of another WinForms app (the writer), which is signed. Where is the security if the reader can read the writer's file? I thought only signed code could access the file! If all .NET applications are born equal and have full permission to access isolated storage, where is the security then? If I can install and run an EXE from isolated storage, why don't I install a virus and run it? I am trusted to access this area, but the virus or whatever will not be trusted to access the rest of the file system; it can only access the memory, and this is dangerous enough. I cannot see any difference between using the app data folder to save state and using isolated storage, except a long nasty path! I want to try giving low trust to the reader code and retest, but they said "isolated storage is actually created for giving low-trusted applications the right to save their state".

    Reader code:

        private void button1_Click(object sender, EventArgs e)
        {
            String path = @"C:\Documents and Settings\All Users\Application Data\IsolatedStorage\efv5cmbz.ewt\2ehuny0c.qvv\StrongName.5v3airc2lkv0onfrhsm2h3uiio35oarw\AssemFiles\toto12\ABC.txt";
            StreamReader reader = new StreamReader(path);
            var test = reader.ReadLine();
            reader.Close();
        }

    Writer:

        private void button1_Click(object sender, EventArgs e)
        {
            IsolatedStorageFile isolatedFile = IsolatedStorageFile.GetMachineStoreForAssembly();
            isolatedFile.CreateDirectory("toto12");
            IsolatedStorageFileStream isolatedStorage = new IsolatedStorageFileStream(@"toto12\ABC.txt", System.IO.FileMode.Create, isolatedFile);
            StreamWriter writer = new StreamWriter(isolatedStorage);
            writer.WriteLine("Ana 2akol we ashrab kai a3eesh wa akbora");
            writer.Close();
            writer.Dispose();
        }

  • LPX-00607 for ora:contains in java but not sqlplus

    - by Windle
    Hey all, I am trying to do some SQL queries against Oracle 11g and am having issues using ora:contains. I am using Spring's JDBC support, and my code generates this SQL statement:

        select * from view_name
        where column_a = ? and column_b = ?
        and existsNode(xmltype(clob_column),
                       'record/name[ora:contains(text(), "name1") > 0]',
                       'xmlns:ora="http://xmlns.oralce.com/xdb"') = 1

    I have removed the actual view/column names, obviously, but when I copy that into SQL*Plus and substitute in random values, the select executes properly. When I try to run it in my DAO code I get this stack trace:

        org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback;
        uncategorized SQLException for SQL [the big select above]; SQL state [99999];
        error code [31011]; ORA-31011: XML parsing failed.
        ORA-19202: Error occured in XML processing
        LPX-00607: Invalid reference: 'contains'
        ; nested exception is java.sql.SQLException: ORA-31011: XML parsing failed
        ORA-19202: Error occured in XML processing
        LPX-00607: Invalid reference: 'contains'
        (continues on like this for a while....)

    I think it is worth mentioning that I am using Maven, and it is possible I am missing some dependency that is required for this. Sorry the post is so long, but I wanted to err on the side of too much info. Thanks for taking the time to read this at least =) -Windle

  • rails best practices where to place unobtrusive javascript

    - by nathanvda
    Hi there, my Rails applications (all 2.3.5) use a total mix of inline JavaScript, RJS, Prototype and jQuery. Let's call it learning or growing pains. Lately I have been more and more infatuated with unobtrusive JavaScript. It makes your HTML clean, in the same way CSS cleaned it up. But most examples I have seen are small examples, and they put all the JavaScript (jQuery) inside application.js. Now I have a pretty big application, and I am thinking up ways to structure my JS. I somehow like my script to stay close to the view, so I am thinking of something like:

        orders.html.erb
        orders.js

    where orders.js contains the unobtrusive JavaScript specific to that view. But maybe that's just me being too conservative :) I have read some posts by Yehuda Katz about this very problem here and here, where he tackles it: his approach will go through your JS files and only load those relevant to your view. But alas, I can't find a current implementation. So my questions:

        - how do you best structure your unobtrusive JavaScript: manage your code, and make it obvious from the HTML what something is supposed to do? I guess good class names go a long way :)
        - how do you arrange your files: load them all in? just a few? do you use content_for :script or javascript_include_tag in your view to load the relevant scripts? Or ... ?
        - do you write very generic functions (like a delete), with parameters (add extra attributes?), or very specific functions (DRY?)

    I know in Rails 3 there is a standard set, and everything is unobtrusive there. But how to start in Rails 2.3.5? In short: what are the best practices for doing unobtrusive JavaScript in Rails? :)
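
    For the per-view loading question in the list above, one sketch of the content_for approach (assuming a public/javascripts/orders.js holding the unobtrusive code for that view; file names are placeholders):

        <%# app/views/layouts/application.html.erb %>
        <%= javascript_include_tag 'jquery', 'application' %>
        <%= yield :head %>

        <%# app/views/orders/index.html.erb %>
        <% content_for :head do %>
          <%= javascript_include_tag 'orders' %>
        <% end %>

    The layout stays generic while each view pulls in only its own script; the trade-off is one extra HTTP request per view unless the files are later concatenated or cached.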

  • Reloading a JTree during runtime

    - by Patrick Kiernan
    I create a JTree and its model in a class separate from the GUI class. The data for the JTree is extracted from a file. In the GUI class the user can add files from the file system to an AWT list. After the user clicks on a file in the list, I want the JTree to update. The variable name for the JTree is schemaTree. I have the following code for when an item in the list is selected:

        private void schemaListItemStateChanged(java.awt.event.ItemEvent evt) {
            int selection = schemaList.getSelectedIndex();
            File selectedFile = schemas.get(selection);
            long fileSize = selectedFile.length();
            fileInfoLabel.setText("Size: " + fileSize + " bytes");
            schemaParser = new XSDParser(selectedFile.getAbsolutePath());
            TreeModel model = schemaParser.generateTreeModel();
            schemaTree.setModel(model);
        }

    I've updated the code to correspond to the accepted answer. The JTree now updates correctly based on which file I select in the list.

  • intelligent path truncation/ellipsis for display

    - by peterchen
    I am looking for an existing path truncation algorithm (similar to what the Win32 static control does with SS_PATHELLIPSIS) for a set of paths, one that focuses on the distinct elements. For example, if my paths are like this:

        Unit with X/Test 3V/
        Unit with X/Test 4V/
        Unit with X/Test 5V/
        Unit without X/Test 3V/
        Unit without X/Test 6V/
        Unit without X/2nd Test 6V/

    when not enough display space is available, they should be truncated to something like this:

        ...with X/...3V/
        ...with X/...4V/
        ...with X/...5V/
        ...without X/...3V/
        ...without X/...6V/
        ...without X/2nd ...6V/

    (assuming that an ellipsis generally is shorter than three letters). This is just an example of a rather simple, ideal case (e.g. they'd all end up at different lengths now, and I wouldn't know how to create a good suggestion when a path "Thingie/Long Test/" is added to the pool). There is no given structure to the path elements; they are assigned by the user, but often items will have similar segments. It should work for proportional fonts, so the algorithm should take a measure function (and not call it too heavily) or generate a suggestion list. Data-wise, a typical use case would contain 2..4 path segments and 20 elements per segment. I am looking for previous attempts in that direction, and whether that's solvable with a sensible amount of code or dependencies.

  • Can SOTI's MobiControl software interfere with an ASP.Net web service?

    - by MusiGenesis
    We have a set of WinMo (5.0) devices running a .NET CF application that talks to an ASP.Net web service running on a server. The devices connect to the network either via ActiveSync through a networked PC or directly to the network via an Ethernet dongle. In our development environment, the communication between devices and web service is 100% reliable. In our production environment, the communications are failing erratically and unpredictably. Sometimes calls to the web service (even to a simple test call that just returns a boolean) begin failing every time on a particular device, with the error message "Could not establish connection to network." This is usually fixed by flip-flopping the selected combo box values on the SETTINGS | NETWORKS screen. Sometimes calls on a particular device begin failing with a generic "WebException" message. The fix for this problem (so far) is either to reset the device (i.e. reinstall the OS) or else it just can't be fixed on some devices. To the best of our knowledge, everything about the DEV and PROD systems are the same (same server and device specs). The most obvious difference to us is that the PROD devices are all controlled by SOTI's MobiControl (which is server-side software that communicates with a SOTI client application installed on each device), whereas our DEV environment does not have SOTI installed anywhere (obviously we should have it there as well - long story). Does anybody have any experience with SOTI MobiControl and/or know of any documented problems where SOTI interferes with other communication mechanisms on a device?

  • Issue intercepting property in Silverlight application

    - by joblot
    I am using Ninject as the DI container in a Silverlight application. Now I am extending the application to support interception and have started integrating the DynamicProxy2 extension for Ninject. I am trying to intercept calls to properties on a ViewModel and end up getting the following exception:

        Attempt to access the method failed: System.Reflection.Emit.DynamicMethod..ctor(System.String, System.Type, System.Type[], System.Reflection.Module, Boolean)

    This exception is thrown when invocation.Proceed() is called. I tried two implementations of the interceptor and they both fail:

        public class NotifyPropertyChangedInterceptor : SimpleInterceptor
        {
            protected override void AfterInvoke(IInvocation invocation)
            {
                var model = (IAutoNotifyPropertyChanged)invocation.Request.Proxy;
                model.OnPropertyChanged(invocation.Request.Method.Name.Substring("set_".Length));
            }
        }

        public class NotifyPropertyChangedInterceptor : IInterceptor
        {
            public void Intercept(IInvocation invocation)
            {
                invocation.Proceed();
                var model = (IAutoNotifyPropertyChanged)invocation.Request.Proxy;
                model.OnPropertyChanged(invocation.Request.Method.Name.Substring("set_".Length));
            }
        }

    I want to call the OnPropertyChanged method on the ViewModel when a property value is set. I am using attribute-based interception:

        [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)]
        public class NotifyPropertyChangedAttribute : InterceptAttribute
        {
            public override IInterceptor CreateInterceptor(IProxyRequest request)
            {
                if (request.Method.Name.StartsWith("set_"))
                    return request.Context.Kernel.Get<NotifyPropertyChangedInterceptor>();
                return null;
            }
        }

    I tested the implementation with a console application and it works all right. I also noted that in the console application, as long as I had Ninject.Extensions.Interception.DynamicProxy2.dll in the same folder as Ninject.dll, I did not have to explicitly load DynamicProxy2Module into the kernel, whereas I had to explicitly load it for the Silverlight application as follows:

        IKernel kernel = new StandardKernel(new DIModules(), new DynamicProxy2Module());

    Could someone please help? Thanks

  • Selectively suppress XML comments?

    - by Mike Post
    We deliver a number of assemblies to external customers, but not all of the public APIs are officially supported. For example, due to less than optimal design choices, sometimes a type must be publicly exposed from an assembly for the rest of our code to work, but we don't want customers to use that type. One part of communicating the lack of support is to not provide any IntelliSense in the form of XML comments. Is there a way to selectively suppress XML comments? I'm looking for something other than ignoring warning 1591, since that's a long-term maintenance issue. Example: I have an assembly with public classes A and B. A is officially supported and should have XML documentation. B is not intended for external use and should not be documented. I could turn on XML documentation and then suppress warning 1591. But when I later add the officially supported class C, I want the compiler to tell me that I've screwed up and failed to add the XML documentation. This wouldn't occur if I had suppressed 1591 at the project level. I suppose I could #pragma across entire classes, but it seems like there should be a better way to do this.
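
    For what it's worth, the #pragma idea can be scoped tightly enough that newly added supported classes still trigger the warning; a minimal sketch:

        /// <summary>Officially supported; documented as usual.</summary>
        public class A
        {
        }

        #pragma warning disable 1591 // B is public only out of necessity; intentionally undocumented
        public class B
        {
        }
        #pragma warning restore 1591

        // A new public class C added here without XML comments still produces
        // CS1591, because the suppression ended at the restore above.
        public class C
        {
        }

    The maintenance cost is one disable/restore pair per unsupported type, which at least keeps the "you forgot to document this" check alive everywhere else in the project.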

  • Help me with DB design

    - by eugeneK
    Hi, I'm developing a text-ads system - a small clone of Google Ads. Here is a diagram with the common tables. To make it short: an advertiser can have up to 10 variants of the same campaign with different text variations, can geo-target his ads, and unique impressions count only for an IP that hasn't been on a certain site for more than 24 hours. Pretty simple, but the question is what I lack in here, from your experience, because later it would be much harder to fix design flaws - some of you have probably done something alike, and there are many SQL gurus in here, so: did I over-normalize the DB, or not normalize it as needed? Second question: my end goal is to get ads for a user from, e.g., Germany who hasn't seen the same ad on the same site for 24 hours, as long as the ads fit the user's country. Each impression is counted, as is each click if there is one. I need to get 5 "random" ads based on IP, country and higher CPC (pay per click). How can I achieve this with the current design - or how should I design the database so that it would be easy to get the ads and show stats for advertisers? Thanks for any help.
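
    As a starting point for the retrieval question, a sketch with hypothetical table and column names (ads, impressions; adjust to the actual schema): filter by the visitor's country, exclude ads this IP has been served in the last 24 hours, and rank by CPC:

        SELECT a.*
        FROM ads AS a
        WHERE a.country_code = 'DE'          -- resolved from the visitor's IP
          AND a.status = 'active'
          AND NOT EXISTS (
              SELECT 1
              FROM impressions AS i
              WHERE i.ad_id = a.id
                AND i.ip = '192.0.2.1'       -- the visitor's IP
                AND i.created_at > NOW() - INTERVAL 24 HOUR
          )
        ORDER BY a.cpc DESC
        LIMIT 5;

    Note that ORDER BY cpc DESC is deterministic rather than random; weighting by CPC and randomizing (e.g. with RAND()) is possible but expensive on large tables, so a common compromise is to pre-select a CPC-ranked candidate pool in SQL and pick the final 5 in application code.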

  • Dynamically created iframe used to download file triggers onload with firebug but not without

    - by justkt
    EDIT: as this problem is now "solved" to the point of working, I am looking to have the information on why. For the fix, see my comment below. I have a web application which repeatedly downloads WAV files dynamically (after a timeout or as instructed by the user) into an iframe, in order to trigger the default audio player to play them. The application targets only FF 2 or 3. To determine when the file has downloaded completely, I am hoping to use the window.onload handler for the iframe. Based on this stackoverflow.com answer I am creating a new iframe each time. As long as Firebug is enabled on the browser using the application, everything works great. Without Firebug, the onload never fires. The version of Firebug is 1.3.1, while I've tested Firefox 2.0.0.19 and 3.0.7. Any ideas how I can get the onload from the iframe to reliably trigger when the WAV file has downloaded? Or is there another way to signal the completion of the download? Here's the pertinent code. HTML (hidden's only attribute is display:none;):

        <div id="audioContainer" class="hidden">
        </div>

    JavaScript (could also use jQuery, but innerHTML is faster than html() from what I've read):

        waitingForFile = true; // (declared at the beginning of closure)
        $("#loading").removeClass("hidden");
        var content = "<iframe id='audioPlayer' name='audioPlayer' src='" +
            /path/to/file.wav + "' onload='notifyLoaded()'></iframe>";
        document.getElementById("audioContainer").innerHTML = content;

    And the content of notifyLoaded:

        function notifyLoaded() {
            waitingForFile = false; // (declared at beginning of the closure)
            $("#loading").addClass("hidden");
        }

    I have also tried creating the iframe via document.createElement, but I found the same behavior: the onload triggered each time with Firebug enabled and never without it. EDIT: Fixed the information on how the iframe is being declared and added the callback function code. No, no console.log calls here.

  • Connecting repeatable and non-repeatable backgrounds without a seam

    - by Stiggler
    I have a 700x300 background repeating seamlessly inside the main content-div. Now I'd like to attach a div at the bottom of the content-div, containing a different, non-repeatable background image that connects seamlessly with the repeatable background above it. Essentially, the non-repeatable image will look like the end piece of the repeatable image. Due to the nature of the pattern, unless the full 300px height of the background image is visible in the last repeat of the content-div's background, the background in the div below won't connect seamlessly. Basically, I need the content-div's height to be a multiple of 300px under all circumstances. What's a good approach to this sort of problem? I've tried resizing the content-div on loading the page, but this only works as long as the content-div doesn't contain any resizing, dynamic content, which is not my case:

        function adjustContentHeight() {
            // Setting content div's height to nearest upper multiple of column background's height,
            // forcing it not to be cut off when repeated.
            var contentBgHeight = 300;
            var contentHeight = $("#content").height();
            var adjustedHeight = Math.ceil(contentHeight / contentBgHeight);
            $("#content").height(adjustedHeight * contentBgHeight);
        }
        $(document).ready(adjustContentHeight);

    What I'm looking for there is a way to respond to a div resize event, but there doesn't seem to be such a thing. Also, please assume I have no access to the JS controlling the resizing of content in the content-div, though this is potentially a way of solving the problem. Another potential solution I was thinking of is to offset the background image in the bottom div by a certain amount depending on the height of the content-div. Again, the missing piece seems to be the ability to respond to a resize event.
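
    Plain divs don't fire a resize event, so one common fallback - a sketch building on the function above - is to poll: let the div reflow to its natural content height, then snap it back to a multiple of 300px whenever the poll runs. Both style changes happen in the same tick, so the intermediate auto height is never painted:

        function snapContentHeight() {
            var contentBgHeight = 300;
            var $content = $("#content");
            $content.css("height", "auto");  // let it reflow to its natural content height
            var natural = $content.height();
            $content.height(Math.ceil(natural / contentBgHeight) * contentBgHeight);
        }

        $(document).ready(function () {
            snapContentHeight();
            // poll for content changes we can't observe directly
            setInterval(snapContentHeight, 250);
        });

    The background-offset idea mentioned above avoids forcing the height, but it needs the same trigger when the content resizes, so the polling problem doesn't go away.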

  • Opengl-es draw an .obj file, but how?

    - by lacas
    I'd like to parse an .obj file. My parser is working well, but my display code is not. The obj file is here; my code is:

        public ObjModelParser parse() {
            long startTime = System.currentTimeMillis();
            InputStream fileIn = resources.openRawResource(resourceID);
            BufferedReader buffer = new BufferedReader(new InputStreamReader(fileIn));
            String line = "";
            Log.e("model loader", "Start parsing object " + resourceID);
            try {
                while ((line = buffer.readLine()) != null) {
                    StringTokenizer parts = new StringTokenizer(line, " ");
                    int numTokens = parts.countTokens();
                    if (numTokens == 0)
                        continue;
                    String part = parts.nextToken();
                    if (part.equals(VERTEX)) {
                        Log.e("v ", line);
                        vertices.add(Float.parseFloat(parts.nextToken()));
                        vertices.add(Float.parseFloat(parts.nextToken()));
                        vertices.add(Float.parseFloat(parts.nextToken()));
        ....

    and my display code draws that model with TRIANGLE_STRIP and

        gl.glDrawArrays(rendermode, 0, coords.length/dimension);

    What is the mistake here? edited: file here to show the good coords from my program for a cube, and the coords from the .obj file, which never show. Thanks, Leslie
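
    For what it's worth, a common trap with .obj data, shown as a sketch of face parsing (the indices list is hypothetical, alongside the vertices list above): face ('f') entries are 1-based and may carry texture/normal references after slashes, and the resulting indexed triangles are usually drawn as separate triangles rather than a TRIANGLE_STRIP:

        if (part.equals("f")) {                            // e.g. "f 1/1/1 2/2/2 3/3/3"
            while (parts.hasMoreTokens()) {
                String vertex = parts.nextToken();
                String vIndex = vertex.split("/")[0];      // keep only the position index
                indices.add(Short.parseShort(vIndex) - 1); // .obj indices are 1-based
            }
        }

    Drawing would then go through glDrawElements with the index buffer instead of glDrawArrays over raw vertex order.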

  • Are Large iPhone Ping Times Indicative of Application Latency?

    - by yar
    I am contemplating creating a realtime app where an iPod Touch/iPhone/iPad talks to a server-side component (which produces MIDI, and sends it onward within the host). When I ping my iPod Touch over WiFi I get huge latency (and an enormous variance, too):

        64 bytes from 192.168.1.3: icmp_seq=9 ttl=64 time=38.616 ms
        64 bytes from 192.168.1.3: icmp_seq=10 ttl=64 time=61.795 ms
        64 bytes from 192.168.1.3: icmp_seq=11 ttl=64 time=85.162 ms
        64 bytes from 192.168.1.3: icmp_seq=12 ttl=64 time=109.956 ms
        64 bytes from 192.168.1.3: icmp_seq=13 ttl=64 time=31.452 ms
        64 bytes from 192.168.1.3: icmp_seq=14 ttl=64 time=55.187 ms
        64 bytes from 192.168.1.3: icmp_seq=15 ttl=64 time=78.531 ms
        64 bytes from 192.168.1.3: icmp_seq=16 ttl=64 time=102.342 ms
        64 bytes from 192.168.1.3: icmp_seq=17 ttl=64 time=25.249 ms

    Even if this is double what the iPhone-to-host or host-to-iPhone time would be, 15ms+ is too long for the app I'm considering. Is there any faster way around this (e.g., a USB cable)? If not, would building the app on Android offer any other options? Traceroute reports more workable times:

        traceroute to 192.168.1.3 (192.168.1.3), 64 hops max, 52 byte packets
         1  192.168.1.3 (192.168.1.3)  4.662 ms  3.182 ms  3.034 ms

    Can anyone decipher this difference between ping and traceroute for me, and what they might mean for an application that needs to talk to (and from) a host?

  • Connecting to a network drive programmatically and caching credentials

    - by Chris Doggett
    I'm finally set up to be able to work from home via VPN (using Shrew as a client), and I only have one annoyance. We use some batch files to upload config files to a network drive. This works fine from work, and from my team lead's laptop, but both of those machines are on the domain. My home system is not, and won't be, so when I run the batch file I get a ton of "invalid drive" errors because I'm not a domain user. The solution I've found so far is to make a batch file with the following:

        explorer \\MACHINE1
        explorer \\MACHINE2
        explorer \\MACHINE3

    and then manually log in to each machine using my domain credentials as the prompts pop up. Unfortunately, there are around 10 machines I may need to use, and it's a pain to keep entering the password if I missed one that a batch file requires. I'm looking into using the answer to this question to make a little C# app that'll take the login info once and log in programmatically. Will the authentication be shared automatically with Explorer, or is there anything special I need to do? If it does work, how long are the credentials cached? Is there an app that does something like this automatically? Unfortunately, domain authentication via the VPN isn't an option, according to our admin. EDIT: If there's a way to pass login info to Explorer via the command line, that would be even easier using Ruby and highline.
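
    Before writing a helper app, it may be worth trying net use from the batch file itself: it authenticates a session to each machine, and Explorer and UNC paths then reuse those cached credentials (typically for the rest of the logon session, or until the connection is deleted). A sketch - domain, user and machine names are placeholders:

        net use \\MACHINE1\IPC$ mypassword /user:DOMAIN\myuser
        net use \\MACHINE2\IPC$ mypassword /user:DOMAIN\myuser
        net use \\MACHINE3\IPC$ mypassword /user:DOMAIN\myuser

        rem ... run the config-upload batch files here ...

        rem drop the cached sessions afterwards:
        net use \\MACHINE1\IPC$ /delete

    Connecting to the IPC$ share establishes the credentials without mapping a drive letter, which keeps the upload batch files untouched.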

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data. More specific example: I have a CSV file with each StackOverflow user in a row and each column containing various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like "You vs. the average StackOverflow poster on this metric". I want a table to appear there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow. I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open-source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching - the data in this case is being crunched by a series of scripts and SPSS, so I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like EPS, but not something like HTML. The report formatting is fussy, done, externally determined, and handed down from on high. It's print-oriented, not webby. Thanks in advance.

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
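
    A minimal sketch of that coarse first pass (all names hypothetical; field(x, y, z) stands in for the expensive field evaluation on the fine grid): sample only the corners of large blocks and skip any block whose samples all sit below the isolevel by a safety margin.

        using System;
        using System.Collections.Generic;

        static class CoarsePass
        {
            // Returns the origins of coarse blocks worth running marching cubes on.
            // blockSize is in fine-grid cells; margin compensates for variation
            // between coarse samples (too small punches holes in the mesh,
            // too large saves nothing).
            public static List<(int x, int y, int z)> FindActiveBlocks(
                int nx, int ny, int nz, int blockSize,
                float isoLevel, float margin,
                Func<int, int, int, float> field)
            {
                var active = new List<(int, int, int)>();
                for (int bx = 0; bx < nx; bx += blockSize)
                for (int by = 0; by < ny; by += blockSize)
                for (int bz = 0; bz < nz; bz += blockSize)
                {
                    float max = float.MinValue;
                    for (int corner = 0; corner < 8; corner++)  // sample the 8 block corners
                    {
                        int x = Math.Min(bx + ((corner & 1) * blockSize), nx - 1);
                        int y = Math.Min(by + (((corner >> 1) & 1) * blockSize), ny - 1);
                        int z = Math.Min(bz + (((corner >> 2) & 1) * blockSize), nz - 1);
                        max = Math.Max(max, field(x, y, z));
                    }
                    if (max >= isoLevel - margin)   // block may cross the isosurface
                        active.Add((bx, by, bz));
                }
                return active;
            }
        }

    The pitfall is the one the question suspects: any feature that fits entirely inside a block without lifting a corner sample above isoLevel - margin disappears from the mesh, so the margin has to reflect how fast the field can vary across half a block.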

  • perl dynamic path given to 'use lib'

    - by Ed Hyer
    So, my code (Perl scripts and Perl modules) sits in a tree like this:

        trunk/
            util/
            process/
            scripts/

    The util directory has, well, utilities that things in the process/ dir need. They get access like this:

        use FindBin;
        use lib "$FindBin::Bin/../util";
        use UtilityModule qw(all);

    That construct doesn't care where you start, as long as you're at the same level in the tree as util/. But I decided that scripts/ was getting too crowded, so I created:

        scripts/scripts1
        scripts/scripts2

    Now I see that this doesn't work. If I run trunk/scripts/scripts1/call_script.pl, and it calls trunk/process/process_script.pl, then process_script.pl will fail trying to get the routines from UtilityModule, because the path that FindBin returns is the path of the top-level calling script. The first ten ways I thought of to solve this all involved something like:

        use lib $path_that_came_from_elsewhere;

    but that seems to be something Perl doesn't like to do, except via that FindBin trick. I tried some things involving BEGIN{} blocks, but I don't really know what I'm doing there and will likely just end up refactoring. But if someone has some clever insight into this type of problem, this would be a good chance to earn some points!
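
    One BEGIN-block approach, as a sketch assuming the tree layout above (and Unix-style paths): compute the path before use lib runs at compile time, by walking up from $FindBin::Bin until a util directory is found. That keeps working no matter how deep under scripts/ the caller sits:

        use strict;
        use warnings;
        use FindBin;
        use Cwd qw(abs_path);

        my $utildir;
        BEGIN {
            # walk upward from the script's directory until we find trunk/util
            my $dir = $FindBin::Bin;
            until (-d "$dir/util") {
                my $parent = abs_path("$dir/..");
                die "no util/ directory above $FindBin::Bin\n"
                    if !defined $parent || $parent eq $dir;   # hit the filesystem root
                $dir = $parent;
            }
            $utildir = "$dir/util";
        }
        use lib $utildir;   # runs at compile time, after the BEGIN block has set $utildir
        use UtilityModule qw(all);

    The trick is purely ordering: BEGIN blocks execute as soon as they are compiled, so $utildir already holds a value by the time the use lib statement is processed.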

  • Bash Templating: How to build configuration files from templates with Bash?

    - by FractalizeR
    Hello. I'm writing a script to automate creating configuration files for Apache and PHP for my own webserver. I don't want to use any GUIs like CPanel or ISPConfig. I have some templates of Apache and PHP configuration files. The Bash script needs to read the templates, do variable substitution and output the parsed templates into some folder. What is the best way to do that? I can think of several ways. Which one is the best, or maybe there are better ways? I want to do it in pure Bash (it's easy in PHP, for example).

    1) http://stackoverflow.com/questions/415677/how-to-repace-variables-in-a-nix-text-file

    template.txt:

        the number is ${i}
        the word is ${word}

    script.sh:

        #!/bin/sh
        # set variables
        i=1
        word="dog"
        # read in template one line at the time, and replace variables
        # (more natural (and efficient) way, thanks to Jonathan Leffler)
        while read line
        do
            eval echo "$line"
        done < "./template.txt"

    BTW, how do I redirect output to an external file here? Do I need to escape something if variables contain, say, quotes?

    2) Using cat & sed to replace each variable with its value. Given template.txt:

        The number is ${i}
        The word is ${word}

    Command:

        cat template.txt | sed -e "s/\${i}/1/" | sed -e "s/\${word}/dog/"

    This seems bad to me because of the need to escape many different symbols, and with many variables the line will be too long. Can you think of some other elegant and safe solution?
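
    If depending on GNU gettext is acceptable, envsubst does exactly this substitution safely - no eval, so quotes and backticks in the template can't execute anything. A sketch for the template above:

        #!/bin/sh
        # variables must be exported so envsubst can see them
        export i=1
        export word="dog"

        # replace only ${i} and ${word}, leaving any other $... in the
        # template untouched, and redirect the result to a file
        envsubst '${i} ${word}' < template.txt > config.txt

    The same > config.txt redirection also answers the question under approach 1, though the eval variant there still needs careful escaping of quotes and backticks inside the template.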

  • What is the best way to test against a database (MySQL specific)

    - by justjoe
    Right now I'm testing something on a database - a WordPress database. I have to write, delete and do other operations on it. As you know, it has an indexing mechanism that always makes every new post inherit the next highest possible ID. Please consider that this database is a copy of a database already in use - it has been written to before - so when I finish my testing I will need to make sure it is the same. Right now, my only solution is making backups: when I finish some section of planned testing, I back it up and start the next test on another copy of it. Fortunately, the database is small, so deleting, copying and backing up are easy, but I know this way of database testing is only a partial solution. It forces me to create too many backup copies, and I don't know what I would do if the database were bigger - it would be a very long testing nightmare. So I wonder: is there any solution that works just like a rollback? It would just lock the database and keep new entries as some kind of cache that I could either erase or write into the database. I use MySQL and phpMyAdmin to develop custom solutions. EDIT: How do you effectively test against a database when developing a PHP solution?
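
    The rollback behaviour described here exists natively as transactions, with one caveat: the WordPress tables must use InnoDB (the historical default, MyISAM, silently ignores transactions). A sketch of a test session (wp_posts is a standard WordPress table; substitute whichever tables the test touches):

        -- check the storage engine first; transactions need InnoDB
        SHOW TABLE STATUS WHERE Name = 'wp_posts';

        START TRANSACTION;
        -- ... run the planned inserts / updates / deletes here ...
        SELECT COUNT(*) FROM wp_posts;   -- inspect intermediate state
        ROLLBACK;                        -- everything above is undone

    One wrinkle for the ID concern: AUTO_INCREMENT counters are not rolled back, so rows inserted after the test will skip the IDs consumed during it; if the IDs must be exactly reproducible, the backup/restore approach is still needed for that part.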

  • Finding Common Phrases in SQL Server TEXT Column

    - by regex
    Short Desc: I'm curious to see if I can use SQL Server Analysis Services or some other SQL Server service to mine some data for me that will show commonalities between SQL TEXT fields in a dataset.

    Long Desc: I am looking at a subset of data that consists of about 10,000 rows of TEXT blobs which are used as a notes column in issue-tracking (ticketing) software. I would like to use something out of the box (without having to build anything) that might be able to parse through all of the rows and find commonly used byte sequences in the "Notes" column. In other words, I want to find commonly used phrases (two- to three-word phrases, so 9-20 character sections of the TEXT blob). This will help me better determine whether associates' notes contain similar phrases (troubleshooting techniques) that we could standardize in our troubleshooting process flow.

    Closing Note: I'd really rather not build an application to do this, as my method would probably not be the most efficient. Hopefully all this makes sense. Please let me know in the comments if anything needs clarification.
