Search Results

Search found 21454 results on 859 pages for 'via'.


  • Servlet receives null from Remote EJB3 Session Bean

    - by Hank
    I'm sure this is a beginner error... So I have a JEE6 application with entities, facades (implementing the persistence layer) and Stateless Session Beans (EJB3) with Remote interfaces (providing access to the entities via facades). This is working fine. Via the SLSB I can retrieve and manipulate entities. Now, I'm trying to do this from a Web Application (deployed on the same Glassfish, entity+interface definitions from the JEE app imported as a separate jar). I have a Servlet that receives an instance of the SLSB injected. I get it to retrieve an entity, and the following happens (I can see it in the logs):

    1. the remote SLSB gets instantiated, its method called
    2. the SLSB instantiates the facade, calls the 'get' method
    3. the facade retrieves the object from the DB, returns it
    4. the SLSB returns the object to the caller (all is good until here)
    5. the calling servlet receives .. null !!

    What is going wrong? This should work, right? MyServlet:

        public class MyServlet extends HttpServlet {
            @EJB
            private CampaignControllerRemote campaignController; // remote SLSB

            protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                response.setContentType("text/plain");
                PrintWriter out = response.getWriter();
                try {
                    Campaign c = campaignController.getCampaign(5L); // id of an existing campaign
                    out.println("Got " + c.getId()); // c is null !!
                } finally {
                    out.close();
                }
            }
            ...
        }

    Pls let me know if you want to see other code, and I'll update the post.

  • Not able to use 7-Zip to compress stdin and output with stdout?

    - by acidzombie24
    I get the error "Not implemented". I want to compress a file using 7-Zip via stdin, then take the data via stdout and do more conversions with my application. In the man page it shows this example:

        % echo foo | 7z a dummy -tgzip -si -so /dev/null

    I am using Windows and C#. Results:

        7-Zip 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Creating archive StdOut

        System error:
        Not implemented

    Code:

        public static byte[] a7zipBuf(byte[] b)
        {
            string line;
            var p = new Process();
            line = string.Format("a dummy -t7z -si -so ");
            p.StartInfo.Arguments = line;
            p.StartInfo.FileName = @"C:\Program Files\7-Zip\7z.exe";
            p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.RedirectStandardError = true;
            p.StartInfo.RedirectStandardInput = true;
            p.Start();
            p.StandardInput.BaseStream.Write(b, 0, b.Length);
            p.StandardInput.Close();
            Console.Write(p.StandardError.ReadToEnd());
            //Console.Write(p.StandardOutput.ReadToEnd());
            return p.StandardOutput.BaseStream.ReadFully();
        }

    Is there another simple way to read the file into memory? Right now I can: 1) write to a temporary file and read it (easy, and I can copy/paste some code); 2) use a file pipe (medium? I have never done it); 3) something else.
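
    For illustration only, a minimal sketch of the same stdin/stdout piping idea - written in Python rather than C# just to keep it short, and assuming 7z is on the PATH and that the chosen archive type accepts -si/-so, as in the man page example above:

        import subprocess

        def seven_zip_compress(data: bytes) -> bytes:
            # Feed raw bytes to 7z on stdin (-si) and read the archive from stdout (-so);
            # "dummy" is just the placeholder archive name the "a" command requires.
            result = subprocess.run(
                ["7z", "a", "dummy", "-tgzip", "-si", "-so"],
                input=data,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                check=True,
            )
            return result.stdout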

  • Using MSADO15.DLL and C++ with MinGW/GCC on Windows Vista

    - by Eugen Mihailescu
    INTRODUCTION

    Hi, I am very new to C++; this is my 1st statement. I started with VC++ 2008 Express, but I've noticed that GCC is becoming kind of a standard, so I am trying to make the right steps even from the beginning. I have written a piece of code that connects to MSSQL Server via ADO; in VC++ it's working like a charm by importing MSADO15.dll:

        #import "msado15.dll" no_namespace rename("EOF", "EndOfFile")

    Because I am going to move away from VC++ I was looking for an alternative (eventually multi-platform) IDE, so I stick (for this time) with Code::Blocks (I'm using the last nightly build, SVN 6181). As compiler I chose GCC 3.4.5 (ported via MinGW 5.1.6), under Vista. I was trying to compile a simple "hello world" application with GCC that uses/imports the same msado15.dll:

        #import "c:\Program Files\Common Files\System\ADO\msado15.dll" no_namespace rename("EOF", "EndOfFile")

    and I was surprised to see a lot of compile-time errors. I expected the #import compiler directive to generate a library from "msado15.dll" so it could be linked to later (at link-edit time or whatever). Instead it was trying to read it as a normal file (like a header file, if you like), interpreting each line of the DLL (which has an MZ signature):

    Example:

        Compiling: main.cpp
        E:\MyPath\main.cpp:2:64: warning: extra tokens at end of #import directive
        In file included from E:\MyPath\main.cpp:2:
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\144' in program
        In file included from E:\MyPath\main.cpp:2:
        c:\Program Files\Common Files\System\ADO\msado15.dll:1:4: warning: null character(s) ignored
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\3' in program
        c:\Program Files\Common Files\System\ADO\msado15.dll:1:6: warning: null character(s) ignored
        c:\Program Files\Common Files\System\ADO\msado15.dll:1: error: stray '\4' in program

    ... and so on.

    MY QUESTION

    Well, it is obvious that under this version of GCC the #import directive does not do the expected job (perhaps #import is no longer supported by GCC), so finally my question: how can I use ADO to access an MSSQL database from a C++ program compiled with GCC (v3.4.5)?

  • How to set up RPX widget and facebook app to be able to authenticate with rpx_now?

    - by Andrei
    Using the sample app for the rpx_now gem (http://github.com/grosser/rpx_now_example) on localhost:3000, I have successfully logged in via Google Accounts, myOpenID and Yahoo, but cannot make it work via Facebook. In the RPX app/widget settings I have set my facebook-app key and secret. In my facebook app settings, the Connect URL is myappname.rpxnow.com. But when I try to connect, I don't even see a facebook login page, just a number of redirects, and I am back on my localhost with the following exception: http://gist.github.com/386520

    Before, I was successfully connecting with the oauth2 gem, however without fetching user data - only authentication. That time I set only the key/secret and localhost as my Connect URL. Currently, I don't even ask for email etc., but still the same problem. Can it happen because rpx_now cannot get the requested user data from facebook? Or is it a problem with the facebook key/secret? Maybe I need to provide more settings for my facebook app?

        RPXNow::ApiError in UsersController#create
        Got error: Invalid parameter: token (code: 1), HTTP status: 200
        RAILS_ROOT: /home/Andrei/rpx_now_example

        /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now/api.rb:71:in `parse_response'
        /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now/api.rb:21:in `call'
        /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now.rb:23:in `user_data'
        /home/Andrei/rpx_now_example/app/controllers/users_controller.rb:16:in `create'

        Request Parameters: None

        Response Headers: {"Content-Type"="", "Cache-Control"="no-cache"}

  • Windows-Mobile Directshow: Specifying bitrate/quality of a WMV video capture

    - by Landstander
    Hi - I'm stumped on this, and I'm really hoping someone could point me in the right direction. I'm currently capturing video in Windows Mobile and encoding it using the WMV 9 DMO (CLSID_CWMV9EncMediaObject). That all works well enough, but the output video's bitrate is too high, resulting in a video file that's much too large for my needs. Ultimately, my goal is to mimic the video settings that Microsoft's Camera Capture Dialog outputs in the "messaging" quality mode (64kbps) from my C++ code. Currently, my code's outputting a WMV file with a bitrate of 352kbps.

    The only example I could find of specifying the capture bitrate with a WMV9 DMO was this. The idea in that code was basically to use a property bag to write a bitrate to a property of the DMO.

    Update: In Windows Mobile, the closest codec property I can find that seems to equate to the bitrate is "g_wszWMVCVBRQuality". Microsoft's documentation of this property is extremely confusing to me: it basically seems to say that a higher number equates to a higher quality, but it gives absolutely no explanation of the specifics for each number. When I attempt to set this property to a value like "1" via a property bag for the WMV9 DMO, I run into a -2147467259 (unknown) error.

    To summarize: what is the basic strategy to specify the bitrate/quality of a video being captured via DirectShow (WMV9) on a Windows Mobile platform? I've heard of (or wondered about) the following methods:

    1. Use the property bag to change the encoder DMO's property that corresponds to bitrate/quality (currently failing).
    2. Create your own custom transcoder/encoder to specify it. This seems unnecessary since the WMV encoder works well enough - it's just at too high a bitrate.
    3. The VIDEOINFOHEADER has a bitrate property, but I suspect that specifying new settings here will do nothing to alter the actual encoding process, since I wouldn't think file attributes would come into play until after the encoding.

    Any suggestions?

    PS: I would post specific source code, but at this point it may confuse more than it helps since I'm floundering so much on how to do this. At this point, I'm just trying to validate the general strategy. THANKS!

  • Making an AJAX WCF Web Service request during an Async Postback

    - by nekno
    I want to provide status updates during a long-running task on an ASP.NET WebForms page with AJAX. Is there a way to get the ScriptManager to execute and process a script for a web service request during an async postback?

    I have a script on the page that makes a web service request. It runs on page load and periodically using setInterval(). It's running correctly before the async postback is initiated, but it stops running during the async postback, and doesn't run again until after the async postback completes.

    I have an UpdatePanel with a button to trigger an async postback, which executes the long-running task. I also have an instance of an AJAX WCF web service that is working correctly to fetch data and present it on the page but, like I said, it doesn't fetch and present the data until after the async postback completes. During the async postback, the long-running task sends updates from the page to the web service.

    The problem is that I can debug and step through the web service and see that the status updates are correctly set, but the updates aren't retrieved by the client script until the async postback completes. It seems the ScriptManager is busy executing the async postback, so it doesn't run my other JavaScript via setInterval() until the postback completes.

    Is there a way to get the ScriptManager, or otherwise, to run the script to fetch data from the WCF web service during the async postback? I've tried various methods of using the PageRequestManager to run the script on the client-side BeginRequest event for the async postback, but it runs the script, then stops processing the code that should be running via setInterval() while the page request executes.

  • Jackson object mapping - map incoming JSON field to protected property in base class

    - by Pete
    We use Jersey/Jackson for our REST application. Incoming JSON strings get mapped to the @Entity objects in the backend by Jackson to be persisted. The problem arises from the base class that we use for all entities. It has a protected id property, which we want to exchange via REST as well, so that when we send an object that has dependencies, Hibernate will automatically fetch these dependencies by their ids. However, Jackson does not access the setter, even if we override it in the subclass to be public. We also tried using @JsonSetter, but to no avail. Probably Jackson just looks at the base class, sees the ID is not accessible, and skips setting it...

        @MappedSuperclass
        public abstract class AbstractPersistable<PK extends Serializable> implements Persistable<PK> {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private PK id;

            public PK getId() {
                return id;
            }

            protected void setId(final PK id) {
                this.id = id;
            }
        }

    Subclasses:

        public class A extends AbstractPersistable<Long> {
            private String name;
        }

        public class B extends AbstractPersistable<Long> {
            private A a;
            private int value;

            // getter, setter

            // make base class setter accessible
            @Override
            @JsonSetter("id")
            public void setId(Long id) {
                super.setId(id);
            }
        }

    Now if there are some As in our database and we want to create a new B via the REST resource:

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        @Transactional
        public Response create(B b) {
            if (b.getA().getId() == null)
                cry();
        }

    with a JSON string like this:

        {"a":{"id":"1","name":"foo"},"value":"123"}

    The incoming B will have the A reference, but without an ID. Is there any way to tell Jackson to either ignore the base class setter or tell it to use the subclass setter instead? I've just found out about @JsonTypeInfo but I'm not sure this is what I need or how to use it. Thanks for any help!

  • Can I make a "TCP packet modifier" using tun/tap and raw sockets?

    - by benhoyt
    I have a Linux application that talks TCP, and to help with analysis and statistics, I'd like to modify the data in some of the TCP packets that it sends out. I'd prefer to do this without hacking the Linux TCP stack.

    The idea I have so far is to make a bridge which acts as a "TCP packet modifier". My idea is to connect to the application via a tun/tap device on one side of the bridge, and to the network card via raw sockets on the other side of the bridge.

    My concern is that when you open a raw socket it still sends packets up to Linux's TCP stack, and so I couldn't modify them and send them on even if I wanted to. Is this correct?

    A pseudo-C-code sketch of the bridge looks like:

        tap_fd = open_tap_device("/dev/net/tun");
        raw_fd = open_raw_socket();

        for (;;) {
            select(fds = [tap_fd, raw_fd]);
            if (FD_ISSET(tap_fd, &fds)) {
                read_packet(tap_fd);
                modify_packet_if_needed();
                write_packet(raw_fd);
            }
            if (FD_ISSET(raw_fd, &fds)) {
                read_packet(raw_fd);
                modify_packet_if_needed();
                write_packet(tap_fd);
            }
        }

    Does this look possible, or are there other better ways of achieving the same thing? (TCP packet bridging and modification.)
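
    For illustration only, here is a minimal sketch of the open_tap_device step from the pseudo-code above, written in Python and assuming a Linux host with the universal TUN/TAP driver; the ioctl constants are the usual Linux values:

        import fcntl
        import os
        import struct

        TUNSETIFF = 0x400454ca   # ioctl that attaches the fd to a tun/tap interface
        IFF_TUN   = 0x0001       # tun mode: raw IP packets, no Ethernet header
        IFF_NO_PI = 0x1000       # do not prepend the packet-information header

        def open_tun_device(name="tun0"):
            fd = os.open("/dev/net/tun", os.O_RDWR)
            ifr = struct.pack("16sH", name.encode(), IFF_TUN | IFF_NO_PI)
            fcntl.ioctl(fd, TUNSETIFF, ifr)
            return fd  # os.read()/os.write() on fd now carry whole IP packets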

  • Problem with non blocking fifo in bash

    - by timdel
    Hi! I'm running a few Team Fortress 2 servers and I want to write a little management script. Basically the TF2 servers are a foreground process which provides a server console, so I can start the server, type status and get an answer from it:

        ***@purple:~/tf2$ ./start_server_testing
        Auto detecting CPU
        Using AMD Optimised binary.
        Server will auto-restart if there is a crash.
        Console initialized.
        [bla bla bla]
        Connection to Steam servers successful.
        VAC secure mode is activated.
        status
        hostname: Team Fortress
        version : 1.0.6.1/15 3883 secure
        udp/ip  : ***.***.133.31:27600
        map     : ctf_2fort at: 0 x, 0 y, 0 z
        players : 0 (2 max)
        # userid name uniqueid connected ping loss state adr

    Great, now I want to create a script which sends the command sm_reloadadmins to all my servers. The best way I found to do this is using a fifo (named pipe). Now what I want is to have this pipe read-only and non-blocking to the server process, so I can write into the pipe and the server executes it, but I still want to write via the console on the server, so if I switch back to the foreground process of the server and type status, I want an answer printed.

    I tried this (assuming serverfifo was created with mkfifo serverfifo):

        ./start_server_testing < serverfifo

    Not working; the server won't start until something is written to the pipe.

        ./start_server_testing <> serverfifo

    That's actually working pretty well: I can see the console output of the server, and I can write to the fifo and the server executes the commands, but I can't write to the server via the console anymore. Also, if I write 'exit' to the pipe (which should end the server) and I'm running it in a screen, the screen window is getting killed for some reason (wtf, why?).

    I only need the server to read the fifo without blocking AND all my keyboard input on the server itself should be sent to the server AND all server output should be written to the console. Is that possible? If yes, how?
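
    As an aside, the "read-only and non-blocking" open described above corresponds to O_RDONLY | O_NONBLOCK at the system-call level; a minimal sketch in Python (assuming the fifo was created with mkfifo serverfifo):

        import errno
        import os

        fd = os.open("serverfifo", os.O_RDONLY | os.O_NONBLOCK)  # returns at once, even with no writer yet
        try:
            data = os.read(fd, 4096)     # b"" means no writer is currently connected
        except OSError as e:
            if e.errno != errno.EAGAIN:  # EAGAIN: a writer exists but has written nothing yet
                raise
            data = b""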

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files, so I want to do it via the Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az" where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattr, but let's try it anyway) - some Google searching returned me something with 'chmod -a' or 'chmod -i' or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file because I cannot move it or make any other change to it (in Terminal). Removing the write-access flag in Finder solves that.

  • popViewController does not autorotate back to allowed orientation

    - by JoeGaggler
    I have two UIViewControllers, "A" and "B", where "A" overrides shouldAutorotateToInterfaceOrientation to return YES for UIInterfaceOrientationPortrait, and "B" returns YES for all orientations. In my example "A" is the root navigation view controller, and I then use pushViewController for "B". After that I rotate the device into landscape, which successfully autorotates "B", then I pop "B" (back button or via popViewController) to return to "A".

    When targeting iPhone OS 3.1.3, "A" returns to the portrait orientation as expected. When targeting iPhone OS 3.2, I have two side-effects:

    1. "A" is displayed in landscape.
    2. The navigation bar does not update even though "A" is now displayed. The navigation bar still shows the items for "B". Only after trying to go back/pop one more time will the navigation bar animate to show the items for "A". If I instead attempt to push "B" again and go back, I have to pop twice before the navigation bar animates to show the items for "A". During these "intermediate pops" the view for "A" remains displayed.

    While researching this issue, I have seen other answers suggesting performing the rotation manually ([UIDevice setOrientation] or via a transformation); however, this does not help me understand what the problem is, especially why it behaves differently between the two OS versions. So my question is: must all of my UIViewControllers on the UINavigationController stack support exactly the same orientations going forward? And if not, is there something that I need to do to make it behave as it did for OS 3.1.3?

  • Running job in the background from Perl WITHOUT waiting for return

    - by Rafael Almeida
    The Disclaimer

    First of all, I know this question (or close variations) has been asked a thousand times. I really spent a few hours looking in the obvious and the not-so-obvious places, but there may be something small I'm missing.

    The Context

    Let me define the problem more clearly: I'm writing a newsletter app in which I want the actual sending process to be async. As in, the user clicks "send", the request returns immediately and then they can check the progress in a specific page (via AJAX, for example). It's written in your traditional LAMP stack. In the particular host I'm using, PHP's exec() and system() are disabled for security reasons, but Perl's system functions (exec, system and backticks) aren't. So my workaround solution was to create a "trigger" script in Perl that calls the actual sender via the PHP CLI, and redirects to the progress page.

    Where I'm Stuck

    The very line that calls the sender is, as of now:

        system("php -q sender.php &");

    Problem being, it's not returning immediately, but waiting for the script to finish. I want it to run in the background and the system call itself to return right away. I also tried running a similar script in my Linux terminal, and in fact the prompt doesn't show until after the script has finished, even though my test output doesn't run, indicating it's really running in the background.

    What I already tried

    - Perl's exec() function - same result as system().
    - Changing the command to "php -q sender.php | at now", hoping that the "at" daemon would return and that the PHP process itself wouldn't be attached to Perl.

    What should I try now?
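
    To make the "fire and forget" idea concrete, here is a sketch in Python (not the poster's Perl/PHP setup; sender.php is the script from the question): the two points that matter are not holding the parent's stdout/stderr open and detaching the child from the parent's session.

        import subprocess

        # Launch the sender and return immediately; the child keeps running on its own.
        subprocess.Popen(
            ["php", "-q", "sender.php"],
            stdin=subprocess.DEVNULL,
            stdout=subprocess.DEVNULL,   # don't keep the parent's output streams open
            stderr=subprocess.DEVNULL,
            start_new_session=True,      # detach from the parent's session/process group
        )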

  • Source Control - XCode - Visual Studio 2005/2008 / 2010

    - by Mick Walker
    My apologies if this has been asked before. I wasn't quite sure if this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology, so please accept my (double) apologies if I am posting this in the wrong place; my logic in this case was that if it affects the code I write, then this is the place for it.

    At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode & Visual Studio - multiple versions installed in Boot Camp, but generally I run it via Parallels). When visiting a client, I have a similar setup, but on my MacBook Pro. What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client, I can simply grab the latest copy from source control via the MacBook Pro. Whilst visiting the client, he/she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home.

    At the moment, I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client - that's my 'merge'!!! I was wondering if anyone had any ideas of a source control provider I could use which would support both Windows and Mac development environments and is cheap (free would be better).

  • Using PHP cURL with an HTTP Debugging Proxy

    - by Kane
    I'm using the app "Fiddler" to debug a GET attempt to a website via PHP cURL. In order to see the cURL traffic I had to specify that the cURL connection use the Fiddler proxy (see code below).

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, 1);
        curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:8888');
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        curl_setopt($ch, CURLOPT_HEADERFUNCTION, 'read_header');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
        curl_setopt($ch, CURLOPT_REFERER, "http://domain.com");
        curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
        curl_setopt($ch, CURLOPT_COOKIEJAR, "my_cookies.txt");
        curl_setopt($ch, CURLOPT_COOKIEFILE, "my_cookies.txt");
        curl_setopt($ch, CURLOPT_URL, "http://domain.com");
        $response = curl_exec($ch);

    But the problem is that in Fiddler I can only see this. Request (domain.com is just an alias):

        CONNECT domain.com:80 HTTP/1.1

    Response:

        HTTP/1.1 200 Blind-Connection Established

    If I manually load the website in a browser, Fiddler gives me WAY more information. I can see the cookies, the header information, and what I'm receiving via the GET. Any ideas why Fiddler can't see more useful information from PHP cURL?

    Edit: I tried turning on the "Enable HTTPS Decryption" option inside Tools / Fiddler Options / HTTPS (which I'm not sure why I'd need to use, as I didn't tell cURL to use HTTPS). Unfortunately, by changing this setting I now get a Response of:

        HTTP/1.1 502 Connection failed

    Edit: If it helps, the app "Charles" shows me WAY more information than Fiddler, but I really want to figure out Fiddler since I like it better.

  • Custom SSL handling stopped working on Android 2.2 FroYo

    - by Eric
    For my app, Transdroid, I am connecting to remote servers via HTTP and optionally securely via HTTPS. For these HTTPS connections with the HttpClient I am using a custom SSL socket factory implementation to make sure self-signed certificates work. Basically, I accept everything and ignore every check of any certificate. This has been working fine for some time now, but it no longer works on Android 2.2 FroYo. When trying to connect, it will return an exception:

        java.io.IOException: SSL handshake failure: I/O error during system call, Broken pipe

    Here is how I initialize the HttpClient:

        SchemeRegistry registry = new SchemeRegistry();
        registry.register(new Scheme("http", new PlainSocketFactory(), 80));
        registry.register(new Scheme("https", (trustAll ? new FakeSocketFactory() : SSLSocketFactory.getSocketFactory()), 443));
        client = new DefaultHttpClient(new ThreadSafeClientConnManager(httpParams, registry), httpParams);

    I make use of a FakeSocketFactory and FakeTrustManager, of which the source can be found here: http://code.google.com/p/transdroid/source/browse/#svn/trunk/src/org/transdroid/util

    Again, I don't understand why it suddenly stopped working, or even what the error 'Broken pipe' means. I have seen messages on Twitter that Seesmic and Twidroid fail with SSL enabled on FroYo as well, but am unsure if it's related. Thanks for any directions/help!

  • ajax delay load UserControl asp.net

    - by user196202
    Regarding ajax delayed loading of UserControls (or any controls), in the post at Encosia.com: http://encosia.com/2008/02/05/boost-aspnet-performance-with-deferred-content-loading/

    I tried to implement it, but I noticed that it can be done only for simple controls or UserControls that have simple ASP.NET controls (or HTML tags). When advanced dynamic AJAX controls are involved (like AjaxControlToolkit or Telerik controls) that have JavaScript inside them, this method of injecting the HTML code into the .InnerHtml property of a div tag (for example) IS NOT WORKING. I read that the browser needs to load the scripts on page load, and after that it won't interpret the scripts injected via .InnerHtml.

    So I attached an example of a delayed-load project (from encosia.com by Dave Ward) with my modification (look at DefaultPopup.aspx, beforePopup.aspx and AfterPopup.aspx), in which I modified the RssReader to show a ListView with popup items (which is implemented via the ACT HoverMenuExtender). So in the regular way the popup items are shown correctly, but with the delayed load - which is done by creating a virtual page for rendering the HTML and injecting it into the .InnerHtml property - this ISN'T WORKING.

    So my question is: is there a way to do delayed loading for controls which include scripts, like ACT and Telerik and others? And for the ajax templates - if I need to inject an advanced control into the page, how do I do it with your approach?

    Thanks very much. (I can't attach files here, so please ask me by mail ([email protected]) and I'll send it to you.)
    Zahi Kramer

  • Parsing Line Breaks in PHP/JavaScript

    - by Matt G
    I have a text area in my PHP application where users can enter notes in a project. Sometimes this is displayed on the page via PHP and sometimes it is displayed via JavaScript. The problem is, if the note spans multiple lines (i.e. the user presses Enter while entering notes in the text area), it causes the JS to fail. It's fine when it's being done by the PHP. The line of code in question is:

        var editnotes='<textarea class="desc_text" style="width:20em;" id="notes_editor"><?php print $notes; ?></textarea>';

    So, if the note is over multiple lines, the PHP builds the page as:

        var editnotes='<textarea class="desc_text" style="width:20em;" id="notes_editor">This is
        a test note
        over multiple lines
        </textarea>';

    And this obviously causes problems for the JS. So my question is, what can I do to prevent this? As the code is being built by PHP before it even gets to the browser, I'm thinking that the best approach may be to parse it in the PHP so that the output is something more like this:

        var editnotes='<textarea class="desc_text" style="width:20em;" id="notes_editor">This is<br/>a test note<br/>over multiple lines<br/></textarea>';

    Will this work? How would I do it? Thanks
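
    For what it's worth, the underlying issue is that a raw newline ends a JavaScript string literal, so the note has to be escaped before it is printed into the script; a sketch of that escaping idea, shown in Python purely for illustration (PHP's json_encode plays the same role):

        import json

        note = "This is\na test note\nover multiple lines\n"
        # json.dumps turns the real newlines into the two-character escape \n,
        # so the result stays on one line and is safe inside a <script> block.
        print("var editnotes=" + json.dumps(note) + ";")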

  • How do I handle a POST request in Perl and FastCGI?

    - by Peterim
    Unfortunately, I'm not familiar with Perl, so asking here. Actually I'm using FCGI with Perl. I need to: 1. accept a POST request; 2. send it via POST to another URL; 3. get the results; 4. return the results to the first POST request (4 steps).

    To accept a POST request (step 1) I use the following code (found somewhere on the Internet):

        $ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/;
        if ($ENV{'REQUEST_METHOD'} eq "POST") {
            read(STDIN, $buffer, $ENV{'CONTENT_LENGTH'});
        } else {
            print ("some error");
        }
        @pairs = split(/&/, $buffer);
        foreach $pair (@pairs) {
            ($name, $value) = split(/=/, $pair);
            $value =~ tr/+/ /;
            $value =~ s/%(..)/pack("C", hex($1))/eg;
            $FORM{$name} = $value;
        }

    The content of $name (it's a string) is the result of the first step. Now I need to send $name via a POST request to some_url (step 2), which returns me another result (step 3), which I have to return as a result to the very first POST request (step 4). Any help with this would be greatly appreciated. Thank you.
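
    Purely to illustrate the accept-then-forward flow of the four steps above, here is a sketch in Python rather than Perl (some_url is the placeholder from the question):

        from urllib.parse import parse_qs, urlencode
        from urllib.request import urlopen

        def forward(post_body):
            # Step 1: decode the form-encoded POST body
            # (the same +/%XX decoding the Perl loop does by hand).
            form = {k: v[0] for k, v in parse_qs(post_body).items()}
            # Step 2: re-POST the decoded fields to the other URL.
            data = urlencode(form).encode()
            with urlopen("http://some_url/", data=data) as resp:
                # Steps 3 and 4: read the result and return it to the original caller.
                return resp.read()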

  • SVN checkout browser

    - by phazei
    I've been looking all over for an SVN browser. Now I'm not talking about anything like WebSVN or Trac; I don't want to browse the repository, I want to browse the checkout. I'm looking for a program that lets me browse the checkout (working copy) and shows me the info I'd normally need to SSH for. So I could mark specific files or folders for some commit button, or see the status, or view a diff between the working and a previous version. Basically a web GUI for an svn checkout. A [Windows] program that can let you work on a remote checkout as if it were local would also work.

    Currently I have a checkout on my server running under dev.mysite.com. I log in via FTP and edit and upload the files. I also keep SSH open so I can do a svn st to see what files I've worked on and to commit changes. I want to work on the files in the same environment, so I can't simply use a local checkout. But I don't want to need to work via SSH.

    Are there any apps such as I described? Like a repo browser, but for checkouts, to do commits. Like a WebTortoiseSVN or such. Thanks

  • ASP.Net MVC Json Result: Parameters passed to controller method issue

    - by Moskie
    I'm having a problem getting a controller method that returns a JsonResult to accept parameters passed via the jQuery getJSON method. The code I'm working on works fine when the second parameter ("data") of the getJSON method call is null, but when I attempt to pass in a value there, it seems as if the controller method never even gets called. In this example case, I just want to use an integer.

    The getJSON call that works fine looks like this:

        $.getJSON("/News/ListNewsJson/", null, ListNews_OnReturn);

    The controller method is like this:

        public JsonResult ListNewsJson(int? id)
        {
            ...
            return Json(toReturn);
        }

    By putting a breakpoint in the ListNewsJson method, I see that this method gets called when the data parameter of getJSON is null, but when I replace it with a value, such as, say, 3:

        $.getJSON("/News/ListNewsJson/", 3, ListNews_OnReturn);

    ... the controller method/breakpoint is never hit. Any idea what I'm doing wrong? I should also mention that the controller method works fine if I manually go to the address via my browser ("/News/ListNewsJson/3").

  • Accessing deleted rows from a DataTable

    - by Ken
    Hello: I have a parent WinForm that has a MyDataTable _dt as a member. The MyDataTable type was created in the "typed dataset" designer tool in Visual Studio 2005 (MyDataTable inherits from DataTable). _dt gets populated from a db via ADO.NET. Based on changes from user interaction in the form, I delete a row from the table like so:

        _dt.FindBySomeKey(_someKey).Delete();

    Later on, _dt is passed by value to a dialog form. From there, I need to scan through all the rows to build a string:

        foreach (myDataTableRow row in _dt)
        {
            sbFilter.Append("'" + row.info + "',");
        }

    The problem is that upon doing this after a delete, the following exception is thrown:

        DeletedRowInaccessibleException: Deleted row information cannot be accessed through the row.

    The workaround that I am currently using (which feels like a hack) is the following:

        foreach (myDataTableRow row in _dt)
        {
            if (row.RowState != DataRowState.Deleted && row.RowState != DataRowState.Detached)
            {
                sbFilter.Append("'" + row.info + "',");
            }
        }

    My question: Is this the proper way to do this? Why would the foreach loop access rows that have been tagged via the Delete() method??

  • Can SOTI's MobiControl software interfere with an ASP.Net web service?

    - by MusiGenesis
    We have a set of WinMo (5.0) devices running a .NET CF application that talks to an ASP.NET web service running on a server. The devices connect to the network either via ActiveSync through a networked PC or directly to the network via an Ethernet dongle.

    In our development environment, the communication between devices and web service is 100% reliable. In our production environment, the communications are failing erratically and unpredictably. Sometimes calls to the web service (even to a simple test call that just returns a boolean) begin failing every time on a particular device, with the error message "Could not establish connection to network." This is usually fixed by flip-flopping the selected combo box values on the SETTINGS | NETWORKS screen. Sometimes calls on a particular device begin failing with a generic "WebException" message. The fix for this problem (so far) is either to reset the device (i.e. reinstall the OS), or else it just can't be fixed on some devices.

    To the best of our knowledge, everything about the DEV and PROD systems is the same (same server and device specs). The most obvious difference to us is that the PROD devices are all controlled by SOTI's MobiControl (which is server-side software that communicates with a SOTI client application installed on each device), whereas our DEV environment does not have SOTI installed anywhere (obviously we should have it there as well - long story). Does anybody have any experience with SOTI MobiControl and/or know of any documented problems where SOTI interferes with other communication mechanisms on a device?

  • Access SSAS cube from across domains without direct database connection

    - by SuperKing
    Hello, I'm working with SQL Server Analysis Services for the first time and have the dilemma of working on a project in which users must be able to access SSAS cubes (via a custom web dashboard) that live across different servers and domains, but without having access to the other server's SSAS database connection strings. So Organization A and Organization B will have their own cubes on their own servers, but Organization A users must be able to view Organization B's cubes, and Organization B users must be able to view Organization A's cubes, but neither organization should have access to the connection string.

    I've read about allowing HTTP access to the SSAS server and cube from the link below, but that requires setting up users for authentication or allowing anonymous access to one organization's server for users of another organization, and I'm not sure this would be acceptable for this situation, or if this is the preferred way to do this. Is performance acceptable here? http://technet.microsoft.com/en-us/library/cc917711.aspx

    I also wonder if perhaps it makes sense to run a nightly/weekly process that accesses the other organization's SSAS database via a web service or something, and pulls that data into a database on the organization's server, and then rebuilds the cube. Then that cube would be queried without having to go and connect to the other organization's server when viewing the cube.

    Has anyone else attempted to accomplish something similar? Is HTTP access the standard way to go for this? Or any other possible options? Thanks, and please let me know if you need more info; I'm still unclear on how some of this works.

  • jQuery ajax form submit - multiple post/get requests caused when validation fails

    - by kenny99
    Hi, I have a problem where if a form submission (set up to submit via AJAX) fails validation, the next time the form is submitted it doubles the number of POST/GET requests - which is definitely not what I want to happen. I'm using the jQuery ValidationEngine plugin to submit forms and bind validation messages to my fields. My code is below.

    I think my problem may lie in the fact that I'm retrieving the form action attribute via $('.form_container form').attr('action') - I had to do it this way as using $(this).attr was picking up the very first loaded form's action attr; when I was loading new forms into the page it wouldn't pick up the new form's action correctly, until I removed the $(this) selector and referenced it using the class.

    Can anyone see what I might be doing wrong here? (Note I have 2 similar form handlers which are loaded on domready; that could also be an issue.)

        $('input#eligibility').live("click", function(){
            $(".form_container form").validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $('.form_container form').attr('action'),
                success : function() {
                    var url = $('input.next').attr('rel');
                    ajaxFormStage(url);
                },
                failure : function() {
                    //unique stuff for this form
                }
            });
        });

        //generic form handler - all form submissions not flagged with the #eligibility id
        $('input.next:not(#eligibility)').live("click", function(){
            $(".form_container form").validationEngine({
                ajaxSubmit: true,
                ajaxSubmitFile: $('.form_container form').attr('action'),
                success : function() {
                    var url = $('input.next').attr('rel');
                    ajaxFormStage(url);
                },
                failure : function() {
                }
            });
        });
