Search Results

Search found 17054 results on 683 pages for 'jms request reply'.


  • Alternative Web model

    - by Above The Gods
    One of the problems web apps have compared to native apps, especially on the mobile front, is the constant need to re-download each web page on request. Ultimately, this leads to slower performance. What if web apps only downloaded new pages when they're actually needed, not simply because they're requested? For example: perhaps the server can store a web page version in a cookie. Every slight change to the page on the server side changes the version number. Now, instead of the browser requesting a new page each time, why not just check the version number and have the server send the page only if the versions differ? If the page is unchanged, the user can just use a cached copy. Browsers wouldn't necessarily have to change to accommodate this, correct?
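
    This is essentially what HTTP conditional requests already provide (ETag / If-None-Match and a 304 response). A minimal client-side sketch of the idea, assuming a hypothetical page endpoint that returns an ETag header and answers 304 when the version is unchanged; the storage keys are made up for illustration:

        // Fetch a page, reusing a locally cached copy when the server says the
        // version (ETag) has not changed.
        async function loadPage(url: string): Promise<string> {
          const cachedEtag = localStorage.getItem(`etag:${url}`);
          const cachedBody = localStorage.getItem(`body:${url}`);

          const headers: Record<string, string> = {};
          if (cachedEtag && cachedBody !== null) {
            headers["If-None-Match"] = cachedEtag; // "only send the page if it changed"
          }

          const res = await fetch(url, { headers, cache: "no-store" });

          if (res.status === 304 && cachedBody !== null) {
            return cachedBody;                     // version matched: reuse cached page
          }

          const body = await res.text();
          const etag = res.headers.get("ETag");
          if (etag) {
            localStorage.setItem(`etag:${url}`, etag);
            localStorage.setItem(`body:${url}`, body);
          }
          return body;
        }

    Browsers already do this transparently for normal navigations when the server sends ETag or Last-Modified headers; the sketch just makes the check explicit for fetch/XHR-driven apps.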

    Read the article

  • What should I encrypt in Debian during install?

    - by ianfuture
    I have seen various guides and recommendations on the web about how best to do this, but nothing that clearly explains the best way and why. So, I understand that part of Debian needs to stay un-encrypted on its own partition during install to allow it to boot. Most info I have seen calls this /boot and sets the boot flag. Next, I believe the best approach is to create another partition out of all the rest of the disk space, encrypt it, then on top of that create an LVM, and within the LVM create my various partitions, name them, and select their sizes and file system types. Can I include /swap in the encrypted LVM part? Is this approach sound? If so, what partitions should I use (this is going to be a minimal server install, with a view to installing, as and when, whatever I need for a dev server)? Finally, how does the installer know what to put in each partition I define? I appreciate there is more than one question here, but any help and suggestions would be appreciated. If further clarification is needed, please mention it in the comments. EDIT: 16/3/2010 After Richard Holloway's reply I thought it relevant to add this info: The reasons why I want to do this are to explore maximising security on any server install and set-up, due to an interest in the area of Computer Security and Forensics. Also, I am trying to perform the task as if it were being performed in an enterprise situation. On a technical matter, once set up and configured with minimal packages and ssh, this server will not be physically easy to access, so I will only be entering via ssh. (Yes, I know: why encrypt something no one will ever be able to get their hands on? Because I can and I want to is the simple answer, but see above too.)

    Read the article

  • Bad links point to old domain - should I disavow on new domain?

    - by user32573
    I am working with a site which we'll call www.newdomain.com, which was hit by Penguin this month despite no unusual practices. I found lots of really spammy links to their old site, www.olddomain.com, which 301s to the new domain. So I've gone through the process of identifying which links are really bad, made contact to ask for removal, and am at the stage of disavowing links. But wait! None of the bad links point to newdomain.com, and I worry that a disavow request made via this domain in Webmaster Tools will damage something. Do the old bad links affect the new site? If so, where do I disavow those old bad links? In Webmaster Tools for the new domain?

    Read the article

  • How to limit concurrent file access on a Samba share?

    - by JPbuntu
    I have an Ubuntu 12.04 file server running Samba. There are 6 Windows machines that access the server, as well as two people who will occasionally access files remotely. The problem I am having is that the CAD/CAM software we are using doesn't seem to request file locks, meaning that if two people open a file at the same time, the first person to close the file will get their changes overwritten when the second person saves. I tried changing smb.conf to strict locking = yes, but this doesn't seem to have any effect. File locking with Excel seems to work fine, so I know that Samba is honouring file locks... if they were put on the file in the first place. Is there a way (either in Samba or Ubuntu) to only allow one user to have a file open at a time? If not, does anyone have suggestions for managing a problem like this?

    Read the article

  • HTTP resource bundling/streaming practice

    - by icelava
    Our SPA (plain HTML and JavaScript) makes use of a huge volume of JavaScript and other resources that are downloaded via XHR. Given the sheer number of components and browsers' simultaneous-request limits, we're thinking of ways to deliver our resources more efficiently. A method we're considering is bundling several resources that logically form a coherent group into a single file, thus reducing it to only one XHR (per group). Furthermore, to make it more responsive, we'd like to constantly inspect the partial responseText during the LOADING state, determine whether a usable chunk (atomic resource) has already been downloaded, and make it available for deserialization/processing even before the XHR is DONE (a stream-like experience). We're thinking surely somebody else must have considered roughly the same approach before, but we haven't really come across any library/framework or container file format that is suitable for our scenario. Does anybody else know of something similar?
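
    For the "stream-like" part, a minimal sketch of processing the partial responseText during the LOADING state; the chunk delimiter and the bundle URL are assumptions for illustration, not an existing container format:

        // Process complete chunks of a bundled response as they arrive, without
        // waiting for the whole XHR to finish.
        const DELIMITER = "\n--CHUNK--\n";   // assumed separator between atomic resources

        function loadBundle(url: string, onChunk: (chunk: string) => void): void {
          const xhr = new XMLHttpRequest();
          let consumed = 0;                  // how much of responseText is already handled

          xhr.onprogress = () => {
            const text = xhr.responseText;   // readable while readyState is LOADING
            let boundary = text.indexOf(DELIMITER, consumed);
            while (boundary !== -1) {
              onChunk(text.slice(consumed, boundary)); // one complete atomic resource
              consumed = boundary + DELIMITER.length;
              boundary = text.indexOf(DELIMITER, consumed);
            }
          };

          xhr.onload = () => {
            const rest = xhr.responseText.slice(consumed);
            if (rest.length > 0) onChunk(rest);        // trailing resource, if any
          };

          xhr.open("GET", url);
          xhr.send();
        }

        // Usage sketch:
        // loadBundle("/bundles/core", (chunk) => console.log("got resource", chunk.length));

    In newer browsers the same effect can be had with fetch() and response.body.getReader(), but the XHR version matches the LOADING-state approach described above.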

    Read the article

  • Handle PHP out-of-memory error

    - by PeterMmm
    I have a Drupal-based web site on a relatively small vserver (512MB RAM). Recently the website began returning PHP out-of-memory messages like this: Fatal error: Out of memory (allocated 17039360) (tried to allocate 77824 bytes) in /home/... All php.ini memory-limit parameters are set to off (-1). Probably the website has grown in complexity, content, etc. But I cannot quite interpret that message: does it mean that the whole request has allocated 17MB(?) so far and cannot get roughly 76 KB more from the OS? Has the web server spent all its memory, or does the OS have no more memory to allocate? I'm not sure whether the memory overhead is coming from the web server or another service, because when I get the out-of-memory message I can't even get into the server with ssh. After a while everything runs fine again.

    Read the article

  • Servlet 3.1 Early Draft Now Available

    - by arungupta
    JSR 340 has released an Early Draft of the Servlet 3.1 specification. Other than the usual clarifications and javadoc updates, ProtocolHandler and WebConnection are new classes that encapsulate the protocol-upgrade processing. This will typically be used for upgrading an HTTP connection to a WebSocket. Section 2.3.3.5 of the specification provides more details on it. Section 3.7 explains non-blocking request processing by the Web container. ReadListener and WriteListener are new interfaces that represent a call-back mechanism to read and write data without blocking. As with other Java EE 7 specifications, progress can be tracked at servlet-spec.java.net. The Expert Group discussions are archived and you can participate by sending an email to [email protected].

    Read the article

  • Google Analytics - how to track clicks on a screen?

    - by milesmeow
    Can I track the click of every link, button, dropdown select, etc. on a screen and have it tracked in Google Analytics? I want to create a page and collect data on which widget users use most. What about AJAX stuff? If you're using jQuery or MooTools, can you get your functions to register a fake URL with GA based on user interaction? I used to do this with Flash: every time you click a button, it can initiate a fake URL request. I would make URLs such as ".../customize/eyes/" or ".../customize/nose", etc. Just wondering if I can do that with JavaScript on the page. I've also posted at StackOverflow.
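
    The Flash-era trick translates directly to JavaScript: listen for the interaction and send a virtual pageview (or event) to GA. A rough sketch using plain DOM event delegation; the data-track attribute is an assumption, and the tracking call shown is classic async GA (_gaq), with the Universal Analytics form noted in a comment:

        // Delegate clicks from one listener and report a "fake URL" per widget.
        // Elements opt in with a data-track attribute, e.g.:
        //   <button data-track="/customize/eyes">Eyes</button>
        declare const _gaq: any[];           // provided by the classic async GA snippet

        document.addEventListener("click", (e) => {
          const target = (e.target as HTMLElement).closest("[data-track]");
          if (!target) return;

          const virtualUrl = target.getAttribute("data-track");
          if (virtualUrl) {
            _gaq.push(["_trackPageview", virtualUrl]);     // virtual pageview in GA
            // Universal Analytics equivalent: ga("send", "pageview", virtualUrl);
          }
        });

    The same delegation works for change events on dropdowns; only the event name passed to addEventListener changes.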

    Read the article

  • Ubuntu 12.04 as router with 2 NICs

    - by Blue Gene
    I have been trying this setup for weeks and still cannot make it work... Ubuntu 12.04 64-bit with 2 NICs:

        nic1: eth0: 192.168.2.33 - static IP with internet access (connected to the modem)
        nic2: eth1: 192.168.1.2  - static IP connected to the LAN

    ip_forward is enabled on the Ubuntu box (net.ipv4.ip_forward = 1). On the LAN, the machine with IP address 192.168.1.5 has its gateway set to 192.168.1.2 and is able to ping the gateway, but it cannot ping public addresses. What am I missing? On the router box, route -n shows:

        Destination   Gateway      Genmask        Flags Metric Ref Use Iface
        0.0.0.0       192.168.2.1  0.0.0.0        UG    100    0   0   eth0
        192.168.1.0   0.0.0.0      255.255.255.0  U     0      0   0   eth1
        192.168.2.0   0.0.0.0      255.255.255.0  U     0      0   0   eth0

    I also tried: ip route add 192.168.2.0/24 via 192.168.1.2 dev eth0. route -n on the LAN machine 192.168.1.5:

        Destination   Gateway      Genmask        Flags Metric Ref Use Iface
        0.0.0.0       192.168.1.2  0.0.0.0        UG    100    0   0   eth0
        192.168.1.0   0.0.0.0      255.255.255.0  U     0      0   0   eth0
        192.168.2.0   192.168.1.2  255.255.255.0  UG    0      0   0   eth0

    The iptables default policy is to accept everything. tracepath 8.8.8.8 from the LAN gives:

        1: 192.168.1.5  0.060ms pmtu 1500
        1: 192.168.1.2  3.367ms
        1: 192.168.1.2  3.764ms
        2: no reply

    Is there a way to make this work other than NATing?

    Read the article

  • Getting started with SOAP [closed]

    - by EmmyS
    A site I developed has a new requirement to get weather data from the National Weather Service. They have quite a bit of info on how to use SOAP to get their data and display it in the browser, but what we need to do is use a cron job to get the data at specific intervals, then parse the data out into a database. I have no problem writing PHP code that will run an XSLT and parse XML records out into SQL queries, but I have no idea how to handle this with SOAP (which I've never worked with). Do I get the data via a SOAP request, save it to an XML file on my web server, then run the XSLT against that? Or is there some other way to go about this?

    Read the article

  • eCryptfs: How to keep the home directory mounted when not connected over SSH?

    - by Bebeoix
    I have a daemon program that needs to read a file saved somewhere in my home folder. But every time I close my SSH connection, the daemon can't read the file, because it appears that eCryptfs unmounts the home directory. Maybe there is an option to force eCryptfs to stay mounted when there is no SSH connection? I didn't find one. Thanks. PS: I know this thread, Why is ecryptfs only mounting private home directory over ssh?, but it is not really the proper way to deal with the request.

    Read the article

  • The life saver HttpContext.Current.Items["ParameterName"]

    - by MoezMousavi
    I got stuck passing a parameter to a master page; for some reason, the page lifecycle and the dynamic loading of master pages seem to have issues with public properties defined in the master page within my project. My values were never set and, as a result, the properties became useless. A colleague then mentioned using HttpContext. Have a look at what MSDN says: "Encapsulates all HTTP-specific information about an individual HTTP request" http://msdn.microsoft.com/en-us/library/system.web.httpcontext.aspx HttpContext.Current.Items["ParameterName"] Also, Page.Items can do the same thing. Page.Items "Gets a list of objects stored in the page context" http://msdn.microsoft.com/en-us/library/system.web.ui.page.items.aspx since your master page and content page are rendered as a single document anyway.

    Read the article

  • My cPanel login is being redirected. How can I resolve this?

    - by Suz
    I'm trying to get to cPanel to manage my website. When I type www.mydomain.com:2082 into the browser window, the request seems to be redirected. I made a screencast so I could slow down the changes in the address bar. First it seems to go to http://www.mydomain.com:2082/login then to http://www.mydomain.com/cgi-sys/login.cgi At this point, a screen briefly appears which says 'Login attempt failed' and then the address is redirected to https://this22.thishost.com:2083/, which has no relation to my site at all. This looks to me like there has been an attack on the system and the login.cgi file has been compromised. Any suggestions on how to analyse this further, or fix it? Of course my 'free hosting' isn't any help at all.

    Read the article

  • Can't remove c:/ubuntu folder, please help

    - by user98978
    Hi, today I downloaded Ubuntu with the Wubi installer on my Windows 7 machine, but after restarting the PC and booting into the Ubuntu installer, my PC shut down during the installation because my laptop's battery ran out (I forgot to plug in my adapter), so Ubuntu failed to install. I then booted into Windows 7 to uninstall Ubuntu from Add/Remove Programs and reinstall it, but the uninstaller did not fully remove the files. I still have a folder at c:/ubuntu; when I double-click on that folder, explorer.exe stops responding and I need to open Task Manager to kill the explorer.exe process and restart it. Right-clicking does the same thing. I can't do anything with the c:/ubuntu folder to reinstall Ubuntu with Wubi. I have used Unlocker 1.9 to remove the folder, but I get an error saying: Error 0x8007045D: The request could not be performed because of an I/O device error. Please help! EDIT: Okay, I fixed my problem after following this instruction! I don't understand why you people did not ask me to do this, but it's okay, I forgive you, LoL!

    Read the article

  • Get/Post Controller Logic Best Practice

    - by Brian Mains
    In an ASP.NET MVC project (Razor), I have a GET request which loads one of two properties on a model, depending on the parameter passed into the action method. If the parameter has a value, the Group property is supplied data; if not, the Groups collection property is supplied data. In the POST action method, when I process the data, I have to provide similar logic to repopulate the view, and I could get away with returning Action(param) (the GET response) to the caller. My question is, based on experience, is that a good practice to get into? I see some downsides to doing it, but it avoids redundant code. Or is there a better alternative?

    Read the article

  • Securing Facebook

    - by Promather
    Probably like most of you, I am concerned about privacy on Facebook. Some people suggested that I use the HTTPS address instead. Unfortunately, many links in the HTTPS page itself link back to HTTP. So I am wondering whether it is possible in Ubuntu to redirect any request to http://www.facebook.com/ to https://www.facebook.com/ That way I would feel safer. If you also know the solution for Windows, it would be great to share it (probably as a comment to my question rather than an answer, as this forum is supposed to be for Ubuntu) so that I can pass it on to friends.

    Read the article

  • DDD: service contains two repositories

    - by tikhop
    Is it correct to have two repositories inside one service, and would that service be an application service or a domain service? Suppose I have a Passenger object that should contain a Passport (government ID) object. I am getting Passenger from PassengerRepository. PassengerRepository creates a request to the server, obtains the data (JSON), then parses the received data and stores it inside the repository. I am confused because I want to store Passport as an Entity and put it in a PassportRepository, but all the information about the passport is contained inside the JSON I received above. I guess that I should create a PassengerService that will include PassengerRepository and PassportRepository, with several methods like removePassport, addPassport, getAllPassenger, etc. UPDATE: So I guess the better way is to represent Passport as a VO and store all passports inside the Passenger aggregate. However, there is another question: where should I put the methods (the methods that call the server API) for managing a passenger's passport? I think the better place is within the Passenger aggregate.
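
    For the first variant (an application service coordinating the two repositories), a minimal sketch could look like the following; the interface and method names come from the question, while the signatures and fields are assumptions:

        // Hypothetical shapes; the real entities/VOs would carry more data.
        interface Passport { number: string; holderId: string; }
        interface Passenger { id: string; name: string; }

        interface PassengerRepository {
          byId(id: string): Promise<Passenger>;
        }

        interface PassportRepository {
          byHolder(passengerId: string): Promise<Passport[]>;
          add(passport: Passport): Promise<void>;
          remove(passportNumber: string): Promise<void>;
        }

        // An application service: it only orchestrates the repositories and holds
        // no domain rules itself, which is why using two repositories here is fine.
        class PassengerService {
          constructor(
            private passengers: PassengerRepository,
            private passports: PassportRepository,
          ) {}

          async addPassport(passengerId: string, passport: Passport): Promise<void> {
            await this.passengers.byId(passengerId);   // ensure the passenger exists
            await this.passports.add({ ...passport, holderId: passengerId });
          }

          async getAllPassenger(passengerId: string): Promise<[Passenger, Passport[]]> {
            const passenger = await this.passengers.byId(passengerId);
            const passports = await this.passports.byHolder(passengerId);
            return [passenger, passports];
          }
        }

    With the update in the question (Passport as a VO inside the Passenger aggregate), the second repository disappears and the server calls naturally live behind PassengerRepository instead.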

    Read the article

  • How to implement proper identification and session management on JSON POST requests?

    - by IBr
    I have a minor messaging connection from a website to a server via JSON requests. I have a single endpoint which distributes requests according to identification data. I am using an asynchronous server and handle data as it comes. Now I am thinking about extending the requests with some kind of session. What is the best way to define a session? Issue a cookie at registration and then send a token with each request for as long as the session runs? Should I implement a timeout for the token? Are there alternative methods? Can I cache tokens for same-origin requests? What could I use on the client side (web browser)? How about safety? What techniques should I use to throw away requests with malformed data or too much data, without choking the server? Should I worry?
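
    One common shape for the token idea, sketched below for a Node-style server: issue a random token with an expiry once the caller has identified itself, and require it on every later JSON POST. The header name and the 30-minute timeout are assumptions, and this illustrates the token/timeout mechanics rather than a hardened implementation:

        import { randomBytes } from "crypto";

        const SESSION_TTL_MS = 30 * 60 * 1000;   // assumed 30-minute sliding timeout
        const sessions = new Map<string, { userId: string; expires: number }>();

        // Called once the caller has identified itself (e.g. after registration).
        function issueToken(userId: string): string {
          const token = randomBytes(32).toString("hex");
          sessions.set(token, { userId, expires: Date.now() + SESSION_TTL_MS });
          return token;                          // client stores it (cookie or JS variable)
        }

        // Called on every JSON POST; the token travels in an X-Session-Token header.
        function validateToken(token: string | undefined): string | null {
          if (!token) return null;
          const entry = sessions.get(token);
          if (!entry || entry.expires < Date.now()) {
            if (entry) sessions.delete(token);   // expired: drop it
            return null;
          }
          entry.expires = Date.now() + SESSION_TTL_MS;  // refresh the sliding expiry
          return entry.userId;
        }

    Guarding against malformed or oversized requests is a separate step in front of this: cap the accepted Content-Length and wrap JSON.parse in a try/catch before any session logic runs.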

    Read the article

  • Logging communication between two VMs

    - by sYnfo
    Hi, I'm trying to set up the "malware lab" described in this paper. So far, I've set up a Windows guest system with one host-only network adapter, configured as follows (sorry if the names aren't exactly correct, I don't have an English-language version):

        IP address:       10.0.0.3
        Subnet mask:      255.255.255.0
        Default gateway:  not set
        Preferred DNS:    10.0.0.4
        Alternate DNS:    not set

    And a Linux guest system (Ubuntu 9.04) with two network adapters, bridged (eth0) and host-only (eth1); eth1 has the IP address 10.0.0.4 and eth0 is set by DHCP. Then I configured iptables as described in the paper, i.e.:

        iptables -F -t nat
        iptables -F -t mangle
        iptables -t mangle -P PREROUTING ACCEPT
        iptables -t mangle -P OUTPUT ACCEPT
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -t nat -P OUTPUT ACCEPT
        iptables -t mangle -A PREROUTING -i eth0 -j ACCEPT
        iptables -t mangle -A PREROUTING -p udp -i eth1 -d 10.0.0.3 --dport 53 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 --dport 80 -j ACCEPT
        iptables -t mangle -A PREROUTING -p tcp -i eth1 -d 10.0.0.3 --dport 6000:7000 -j ACCEPT
        iptables -t mangle -A PREROUTING -i eth1 -j ULOG
        iptables -t mangle -A PREROUTING -i eth1 -j DROP

    Now, when I try to ping the Windows system from within the Linux system, it does not reply; I guess that's perfectly normal, because iptables is blocking the ping response. The same happens when I try to ping the Linux system from Windows. But when I try to access any web page from within the Windows system, I would expect that action to get logged by iptables. The thing is, I don't see any lines of that kind in the log file (if I am looking in the right place, that is :) It is /var/log/messages, isn't it?). So, what do you think might be the problem here? I should note that this is the first time I'm using Linux, so don't expect ANY working knowledge of Linux at all... :) Also, since English is not my mother tongue, feel free to point out any grammatical mistakes... :) Thanks for any advice.

    Read the article

  • Interface design where functions need to be called in a specific sequence

    - by Vorac
    The task is to configure a piece of hardware within the device, according to some input specification. This should be achieved as follows:

    1) Collect the configuration information. This can happen at different times and places. For example, module A and module B can both request (at different times) some resources from my module. Those 'resources' are actually what the configuration is.
    2) After it is clear that no more requests are going to be realized, a startup command, giving a summary of the requested resources, needs to be sent to the hardware.
    3) Only after that, can (and must) detailed configuration of said resources be done.
    4) Also, only after 2), can (and must) routing of selected resources to the declared callers be done.

    A common cause for bugs, even for me, who wrote the thing, is mistaking this order. What naming conventions, designs or mechanisms can I employ to make the interface usable by someone who sees the code for the first time?
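
    One mechanism that makes the order hard to get wrong is to encode the phases in the types: the collector only exposes request(), and the detailed-configuration and routing methods exist only on the object returned by the startup call. A minimal sketch of that idea (all names are illustrative, not taken from the actual code):

        // Phase 1: only collecting resource requests is possible.
        class ResourceCollector {
          private requests: string[] = [];

          request(resource: string): void {
            this.requests.push(resource);        // modules A and B call this
          }

          // Phase 2: sends the startup summary and hands back the only object
          // on which detailed configuration and routing are even callable.
          startup(): ConfiguredHardware {
            sendToHardware(`STARTUP ${this.requests.join(",")}`);
            return new ConfiguredHardware(this.requests);
          }
        }

        // Phases 3 and 4: these methods cannot be reached before startup().
        class ConfiguredHardware {
          constructor(private readonly resources: string[]) {}

          configure(resource: string, settings: string): void {
            sendToHardware(`CONFIG ${resource} ${settings}`);
          }

          routeTo(resource: string, caller: string): void {
            sendToHardware(`ROUTE ${resource} -> ${caller}`);
          }
        }

        // Stand-in for the real hardware interface.
        function sendToHardware(command: string): void {
          console.log(command);
        }

    Calling things in the wrong order then becomes a compile error rather than a runtime bug, because the pre-startup object simply has no configure() or routeTo() to call.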

    Read the article

  • New release of Oracle Process Accelerators (11.1.1.6.2)

    - by JuergenKress
    The press release covers industry-focused solutions including Incident Reporting (Public Sector) and Loan Origination (Financial Services), and also updates to Travel Request Management (TRM), Document Routing and Approval (DRA), and Internal Service Requests (ISR). All BPM material is available at our SOA Community Workspace (SOA Community membership required): Presentation | Data Sheet | White paper | Download Process Accelerators and Collateral - OTN SOA & BPM Partner Community. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Can't get 1440x900 resolution with GRUB2 although vbeinfo says it's available

    - by TomSW
    I'm trying to use GRUB2 in graphical mode at 1440x900 resolution, but the result is always garbled nonsense; the highest resolution I can get to work is 1280x800. Word from googling is that as long as vbeinfo lists a resolution, GRUB2 can use it. This doesn't seem to be true: vbeinfo says that 1440x900 is available, but it doesn't work. Testing it from the GRUB2 command line:

        set gxfmode=1440x900
        terminal_output gfxterm    # -> garbled nonsense
        # back to trusty 640x480
        terminal_output console

    The graphics card is an Intel GM965. Once Linux boots, the framebuffer switches to 1440x900. Added after epheminent's reply and various experiments: vbeinfo lists two sets of modes. The first set runs from 0x160 to 0x16b, with resolutions 768x480, 960x600, 1280x800 and 1440x900. Then, after a bunch of text-only modes, comes the second set, containing resolutions 1024x768, 800x600, and 640x480. The first set of modes is not altered by 915resolution; they all work except 1440x900. The resolution of modes in the second set can be altered using the 915resolution module / command available in GRUB2 >= 1.99:

        # in /boot/grub/grub.cfg
        insmod 915resolution
        # 30, 32, 34 all work for me: all that varies is which modes are altered
        915resolution 30 1440 900
        # setting an impossible resolution changes the mode to "text-only"
        # in my case 1280x1024 is not supported
        915resolution 30 1280 1024

    Clearly, 1440x900 should just work: adding it with 915resolution is just a workaround.

    Read the article

  • Is it okay for an application to check for automatic updates at less than a 20-hour interval?

    - by FlameStream
    I have a desktop application that has the ability to update itself automatically on the next restart (without the user's consent, but this is another issue altogether). Assuming that the user would never notice anything related to the update (such as a progress bar, or a pop-up requiring a restart), and that our server could support the request load, is there any reason why it should not check for updates at intervals shorter than 20 hours? The reason I'm asking is that all applications I know of with auto-update capability check for updates every 20 to 24 hours and at startup. I was just wondering if there is an ethical rule about it, or whether it is simply because of the risk of overloading the server.

    Read the article

  • How to teach your users/customers to send better error descriptions

    - by ckeller
    I often have to deal with customers or users who report errors in applications. Most of the time the report is something useless such as "ERROR!!! x does not work", without much more information. To resolve the issue I have to request every single detail from them, which is often more time-consuming than fixing the issue itself. Others send information in formats which are not ideal, like screenshots (of data records, not of errors) although they could send a link (we have access to the systems), and so on. How do you get your users/customers to describe problems in more detail, so that the whole process is easier for both sides? edit: This question is more about the social skills than about how to achieve programmatic collection of logs and error information. I'm aware that that should be part of good software design.

    Read the article

  • 2 Google Tag Manager containers = double triggers?

    - by fred
    I have a Joomla website with existing analytics done by a GTM tag, and I need to add another image tag (a 1x1 pixel) from a separate GTM container. I set the event name for the additional GTM container to a different name from the existing GTM container's (default 'gtm.js'), so that the new tag only fires under the specified event. The new container tested out fine in a blank HTML page, but when it is put on the website, the tag ends up firing twice. I know that because Firebug shows two 1x1 pixels being loaded, and I mark each request with a randomly generated UID to distinguish them on the server side. I suspect this is caused by having multiple GTM container tags, but I want to check whether anyone has run into this problem before. As of now, I have not been able to verify or fix the double-counting problem.
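
    For reference, the "fire only under the specified event" part is normally a single dataLayer push; a simple guard flag rules out the case where the event itself is pushed twice and therefore double-fires the pixel tag. The event name and flag below are illustrative:

        // Push the custom event that the second container's trigger listens for,
        // but at most once per page view.
        declare global {
          interface Window {
            dataLayer: Record<string, unknown>[];
            pixelEventSent?: boolean;
          }
        }

        export function firePixelEvent(): void {
          window.dataLayer = window.dataLayer || [];
          if (window.pixelEventSent) return;     // guard against duplicate pushes
          window.pixelEventSent = true;
          window.dataLayer.push({ event: "pixelContainerEvent" });
        }

    If both pixels still show up, it is worth ruling out the container snippet itself being included twice on the Joomla page (a template and a module can each inject it), since that can also make every tag in that container fire twice.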

    Read the article
