Search Results

Search found 11167 results on 447 pages for 'financial close'.

  • Apache taking up a lot of CPU while running request-tracker4

    - by bhowmik
    I am trying out a request-tracker installation on an EC2 micro instance. The specs for the micro instance are as follows:

    1) Ubuntu 12.04 64-bit, 613 MB RAM, 8 GB hard drive
    2) request-tracker 4.0.4 from the repository, Perl 5.14.2, Apache2, MySQL 5
    3) request-tracker 4.0.4 running with mod_perl2 and the worker MPM
    4) Apache configured with the worker MPM; config snippet given below:

        Timeout 150
        KeepAlive On
        MaxKeepAliveRequests 60
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

    Now, when I start Apache2 it works fine for some time, and after a while the CPU load shoots up to 99% or more, usually in one or more Apache processes. I've tried modifying the worker module configuration without any luck. The log files for both Apache2 and request-tracker4 are set to log debug messages and don't show anything to indicate what could be causing this. The system gets a maximum of 5 users at any given time, and usually (90% of the time) it is just 2. I've only just installed it and we have only 20 tickets in the database. I don't think it's the memory that's causing the issue, since the server isn't swapping or even close to it, and I hardly see the memory usage go up. I would appreciate any pointers on how to go about troubleshooting this. In case it helps, I've also tried a similar installation on a small instance (identical settings except RAM bumped up to 1.7 GB) and I still see the issue.
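    A hedged starting point for pinning down which worker is spinning (assumes the standard Ubuntu Apache layout, that strace is installed, and that the stock status.conf allowing /server-status from localhost is in place; <PID> is a placeholder):

        # show per-thread CPU for all Apache workers
        top -H -p "$(pgrep -d, apache2)"
        # attach to the hot PID and summarize what it is doing in syscalls
        sudo strace -c -f -p <PID>
        # mod_status shows which request each worker is currently handling
        sudo a2enmod status && sudo service apache2 reload
        curl http://localhost/server-status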

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assuming I have a relatively complex web application, I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions:

    - Recording every incoming request with its timestamp in production, in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original. Issue: because we have many servers, getting the centralized log is not trivial.
    - Having a system duplicate requests to a staging area so that I could "plug" a dev version of my webapp into it at any time without affecting production. Issue: I have not found much information about this except this, which suggests to me that it may not be the best solution. On the other hand, it is realistic by definition.

    What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.
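    For the second direction, one commonly cited tool (not mentioned in the post) is GoReplay, which mirrors live traffic to another host. A minimal sketch; the flag names follow GoReplay's documentation and may differ by version, and staging.example.com is a placeholder:

        # mirror traffic arriving on port 80 straight to staging
        sudo gor --input-raw :80 --output-http "http://staging.example.com"
        # or record in production and replay later at the original timing
        sudo gor --input-raw :80 --output-file requests.gor
        gor --input-file requests.gor --output-http "http://staging.example.com"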

  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server and I recently noticed some weird behavior (not sure if it was happening before). With ufw enabled (default deny all incoming, allow all outgoing, allow all HTTP, allow all on a random port I use for SSH), when I perform certain actions in an SSH session, the SSH console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible: for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian

    Addition: if you're wondering, yes, I have TCPKeepAlive set to yes, and I doubt it's related (it would happen with ufw disabled too). As requested, my ufw config is below. Also, I don't know if it has something to do with it, but the server has two IPs: the SSH domain is configured on one, and HTTP is served (via apache2) on the other.

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To            Action      From
        --            ------      ----
        19922/tcp     ALLOW IN    Anywhere
        9418/tcp      ALLOW IN    Anywhere
        80/tcp        ALLOW IN    Anywhere
        443/tcp       ALLOW IN    Anywhere
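    Since the freeze is reproducible, a hedged way to catch the firewall in the act (assumes the SSH port from the rules above and an eth0 interface; adjust both):

        # in one terminal: watch what ufw drops while you reproduce the freeze
        sudo tail -f /var/log/ufw.log
        # in another: watch the SSH conversation itself stall
        sudo tcpdump -ni eth0 port 19922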

  • nginx and proxy_hide_header

    - by giskard
    When I curl a URL I get this answer back:

        < HTTP/1.1 200 OK
        < Server: nginx/0.7.65
        < Date: Thu, 04 Mar 2010 12:18:27 GMT
        < Content-Type: application/json
        < Connection: close
        < Expires: Thu, 04 Mar 2010 12:18:27 UTC
        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200
        < Transfer-Encoding: chunked

    I want to remove these headers:

        < http.context.path: /1/
        < jersey.response: com.sun.jersey.spi.container.ContainerResponse@17646d60
        < http.custom.headers: {Content-Type=text/plain}
        < http.request.path: /2/messages/latest.json
        < http.status: 200

    So I used the proxy_hide_header directive in this way:

        location / {
            if ($arg_id) {
                proxy_pass http..authorized;
                break;
            }
            proxy_pass http..anonymous;
            proxy_hide_header http.context.path;
            proxy_hide_header jersey.response;
            proxy_hide_header http.request.path;
            proxy_hide_header http.status;
        }

    But it doesn't work. Any clues?
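    One thing worth checking, not in the original post: when $arg_id matches, the request is served by the proxy_pass inside the if block, which forms its own configuration context, so it helps to confirm which branch the stray headers come from. A quick check with curl (localhost is a placeholder for the real host):

        # compare both branches; -D - dumps the response headers to stdout
        curl -s -D - -o /dev/null 'http://localhost/?id=1' | grep -Ei '^(http\.|jersey)'
        curl -s -D - -o /dev/null 'http://localhost/'       | grep -Ei '^(http\.|jersey)'

    If proxy_hide_header still has no effect, the third-party headers-more module (more_clear_headers) is a commonly used alternative that can also match headers by wildcard.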

  • Choosing the right e-mail client

    - by CFP
    Hi all, I'm currently using Outlook 2007 (under Windows 7), but I much prefer free software (open source being the best, of course), so I thought I'd ask for expert advice here. I thought it might be easier if I included a small "wanted list":

    - I receive about 15 to 30 e-mails every day, but I have large archives (10,000 emails) which I frequently need to access.
    - I usually open and close my mail program many times, so I'd like it to start pretty fast.
    - I cannot use an online mailbox, because I have too many email addresses (about 5: 1 for work, 1 for home, 1 semi-private, 1 for specific emails, and 1 for newsletters).

    By order of importance, the things I'd like my mail client to be able to do:

    - Efficiently categorize e-mails. Until now I've mostly been using Outlook folders, because filtering by tags was not easy, but I'd rather have one large list of mails, neatly tagged so I can easily filter. I'd love being able to select mails by tags (e.g. in a click or two (could be a tab) show all mails tagged with "software").
    - Create "tagging rules", such as "if the mail was sent to this address, add this tag", or "if the body contains ..., add that tag".
    - Sync contacts with Gmail, handle tasks (syncing with Toodledo would be awesome), possibly provide a calendar.
    - Create e-mail templates, signatures...
    - Other ideas: a timeline, scripting support, being able to import MS Outlook emails, a nice backup format...

    Thanks for sharing ideas and suggestions!

  • Windows 8 Doesn't Shut Down Properly With Fast Start-Up Enabled

    - by Patrick
    While fast start-up is enabled, on shutting the computer down it idles for about 5 minutes after logging out and the screen turning off; only then does it turn off. On returning to Windows I receive an error message saying Windows didn't shut down properly. Hibernate works fine, and I am told this shouldn't be the case: if one doesn't work, neither should the other. Hibernate works whether fast start-up is enabled or disabled, as do restart and sleep. Windows is installed under UEFI. The UEFI ultra fast boot option for my motherboard cannot be enabled, as my GPU doesn't support some UEFI GOP tech; as far as I know that is not related to Windows fast start-up, but I thought it was worth mentioning. To clarify: if fast start-up (http://www.eightforums.com/tutorials/6320-fast-startup-turn-off-windows-8-a.html) is enabled, the computer does not shut down properly.

    EDIT: Some more information on the matter:

    - Formatting didn't fix the issue; it still fails regardless of which drivers are installed. The hardware was purchased about 6 months ago, and it is running a good SSD.
    - Event viewer always shows these two messages in close succession:
      Error (event ID 6008): The previous system shutdown at 7:45:21 PM on 27/10/2012 was unexpected.
      Critical (kernel power, event ID 41): The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
    - Upon installing WPT as suggested below to figure out what was happening during shutdown, and running

          xbootmgr -trace shutdown -noPrepReboot -traceFlags BASE+CSWITCH+DRIVERS+POWER -resultPath C:\TEMP

      Windows fast start-up is now working consistently, and still works after uninstalling WPT. This is the only change to occur on the computer: nothing else has been installed or uninstalled, no Windows updates, nothing. Fast start-up did not work prior to installing WPT and running the command (I made sure I tested).

  • PC shuts down automatically after 10-20 seconds. No POST screen, no beeps

    - by emzero
    I have this not-so-old computer that has not been used for a year or so. Specs:

    - Motherboard: ASUS P5N-E SLI
    - CPU: Intel Core 2 Duo E4300
    - RAM: 2x2 GB SuperTalent DDR2-800
    - VGA: Zogis GeForce 7950GT
    - PSU: Vitsuba San-55-S 550W
    - HD: no hard drives yet

    When I power on the computer, everything seems to start, but right away the whole system shuts down. I've removed and swapped the RAM sticks, taken out the VGA, everything I could think of. So what could be causing this? The PSU? A dead motherboard? The CPU? Any help isolating the problem will be useful. Thanks.

    PS: Please don't close the question; this could be helpful to anybody having a similar problem, even with different hardware.

    UPDATE: I've removed the old thermal paste and applied brand new paste. I also cleaned off all the dust with a high-pressure gas duster. I checked for bad capacitors; all of them seem OK. I opened the PSU, removed giant dust balls, and cleaned it with the duster as well. Still the same problem, but now it stays powered on for almost 20 seconds, with no POST screen and no beeps at all. So I suspect a motherboard or PSU failure. Unfortunately I don't have a tester for the PSU... I don't know what else to try, and I don't have another LGA775 motherboard to test the CPU, RAM, and VGA with.

  • How to configure IIS for SVG and web testing with Visual Studio?

    - by macias
    Let's say I have a simple web page with an svg image in it:

        <img src="foobar.svg" alt="not working" />

    If I make this page a static HTML page and view it directly, the svg is displayed. If I type the address of the svg, it is displayed. But when I make it an .aspx page and launch it dynamically from Visual Studio, I get the alt text. If I type the address of the svg (from localhost, not as a local file), the browser tries to download it instead of displaying it. I already defined the MIME type in IIS (for the entire server: "image/svg+xml") and restarted IIS. Same effect as before. Question: what more should I do?

    Update: WireShark won't work (it is in the documentation), and I also tried RawCap, but it cannot trace my connection (odd). Luckily, Fiddler worked. From the client:

        GET http://127.0.0.1:1731/svg/document_edit.svg HTTP/1.1
        Host: 127.0.0.1:1731
        User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:10.0.1) Gecko/20100101 Firefox/10.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Answer from the server:

        HTTP/1.1 200 OK
        Server: ASP.NET Development Server/10.0.0.0
        Date: Thu, 16 Feb 2012 11:14:38 GMT
        X-AspNet-Version: 4.0.30319
        Cache-Control: private
        Content-Type: application/octet-stream
        Content-Length: 87924
        Connection: Close

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <!-- Created with Inkscape (http://www.inkscape.org/) -->
        <svg xmlns: *** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***

    For the record, here is a useful Q&A for Fiddler: http://stackoverflow.com/questions/826134/how-to-display-localhost-traffic-in-fiddler-while-debugging-an-asp-net-applicati
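    Worth noting: the capture above says "Server: ASP.NET Development Server", so the file is being served by Visual Studio's built-in development server rather than by IIS, and the IIS MIME mapping never comes into play. A quick way to re-check the served type after any change, from any machine with curl:

        # should report image/svg+xml once the right server/MIME config is in effect
        curl -s -D - -o /dev/null http://127.0.0.1:1731/svg/document_edit.svg | grep -i '^Content-Type'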

  • Windows: why do 8 GB of RAM feel like a few MB?

    - by Desmond Hume
    I'm on Windows 7 x64 with a 4-core Intel i7 and 8 GB of RAM, but lately it feels like my computer's "RAM" is located solely on the hard drive. The task manager shows the total amount of memory used by the processes in the list to be only about 1 GB. What has been happening on my computer for a few days now is that one program (Cataloger.exe) is continually processing large quantities of (rather big) files, repeatedly opening and reading them for the purposes of cataloging. It doesn't grow much in memory and stays at about 90 MB, yet the amount of data it processes in, say, 30 minutes can be measured in gigabytes. So my guess was that Windows file caching has something to do with it. After some research on the topic, I came across RamMap, a program that displays detailed info on a computer's RAM. To me it looks like Windows keeps huge amounts of no-longer-needed data in RAM, redirecting new RAM allocation requests to the pagefile on the hard drive. Even when I close Cataloger.exe, RamMap reports the size of the mapped file as about the same for a long time afterwards. And it's not just this particular program: earlier I noticed that a similar slowdown occurred after massive file operations with other programs, so it's really not an exception. Whatever it is, it slows the computer down by something like 50 times. Opening a new tab in Chrome takes 20-30 seconds; opening a new program can take up to a minute. Due to the slowdown, some programs even crash. So what do you think: is the problem hiding in file caching or somewhere else? How do I solve it?

  • Delete ARP cache on Mac OS when moving from one Wifi network to another

    - by Puneet
    I am facing wireless connectivity problems when I move from one Wifi network to another. Here is how it happens: I am at my friend's place and connect to his Wifi; his router's IP address is 192.168.0.1. Everything is fine. I close my laptop, come back to my house, open my laptop, and connect to the Wifi network at my place: different ESSID, but the router address is the same, 192.168.0.1. At this point I can't get to anything on the internet. To debug, I try to ping the router (192.168.0.1); I can't, and get "no route to host". Meanwhile, airport tells me I'm connected to the Wifi. I look at the arp cache and see a permanent entry for 192.168.0.1:

        ? (192.168.0.1) at 5c:d9:98:65:73:6c on en1 permanent [ethernet]

    This "permanent" bit looks problematic. I go ahead and delete the arp cache entry, and all is fine with the world until I go back to my friend's place, where the same situation plays out. Now my question is: why on earth is this happening? And if there is no way around it, can I run a script on Wifi connect/disconnect to clear out the arp cache? I'm using Mac OS X:

        $ uname -a
        Darwin 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
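    For reference, the manual fix uses the standard macOS arp tool; a connect/disconnect hook script would simply wrap the same call:

        # drop only the stale gateway entry
        sudo arp -d 192.168.0.1
        # or flush every cached entry
        sudo arp -d -a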

  • How can I use the `eject` command on a computer I have SSH'd into?

    - by will
    So if I run eject on my machine, it works exactly as expected; however, if I ssh into the machine next to me and do the same thing, it does not work. My computer:

        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/dev/cdrom'
        eject: `/dev/cdrom' is a link to `/dev/sr0'
        eject: `/dev/sr0' is not mounted
        eject: `/dev/sr0' is not a mount point
        eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
        eject: `/dev/sr0' is not a multipartition device
        eject: trying to eject `/dev/sr0' using CD-ROM eject command
        eject: CD-ROM eject command succeeded

    The other computer:

        eject: using default device `cdrom'
        eject: device name is `cdrom'
        eject: expanded name is `/dev/cdrom'
        eject: `/dev/cdrom' is a link to `/dev/sr0'
        eject: `/dev/sr0' is not mounted
        eject: `/dev/sr0' is not a mount point
        eject: checking if device "/dev/sr0" has a removable or hotpluggable flag
        eject: `/dev/sr0' is not a multipartition device
        eject: unable to open `/dev/sr0'

    If I look in the /dev/ dir, I find cdrom, which is a symlink to sr0, as mentioned by the verbose output of eject -v. On my machine, if the drive is open, trying to look at it will close it, and then:

        $ less sr0
        sr0 is not a regular file (use -f to see it)
        $ less -f sr0
        sr0: No medium found

    but on the other computer:

        $ less -f sr0
        sr0: Permission denied

    So I look at the file more closely, and get this on both machines:

        $ ls -la sr0
        brw-rw----+ 1 root cdrom 11, 0 Nov 12 10:13 sr0

    Does anyone know a way around this? I do not have root access.
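    A hedged explanation: the trailing + on the permissions means an ACL is set, and desktop logins typically get a per-session ACL on removable devices that an SSH login does not, leaving only the group bits. To inspect, and to fix if an administrator is willing (getfacl requires the acl package):

        # show the ACL; a locally logged-in user usually appears as an extra entry
        getfacl /dev/sr0
        # joining the cdrom group grants the rw group bit over SSH
        # (needs admin rights; log out and back in afterwards)
        sudo usermod -aG cdrom "$USER"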

  • Can I make a TCP/IP session last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions: we have 1200-1500 of them, most hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see the TCP/IP connection hanging in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or to make it shorter, so the socket is freed for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection once the XML file download is over.

    The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache "KeepAlive" option is set to 0.
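    Two hedged checks with standard Linux tooling. Note that with KeepAlive set to 0, Apache closes each connection after one request, and the side that closes first is the one left holding TIME_WAIT; a socket in TIME_WAIT no longer occupies an Apache worker, though it does consume a port/socket entry:

        # confirm what is piling up, per TCP state
        ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
        # allow reuse of TIME_WAIT sockets; this helps mainly with
        # outbound connections, not with inbound client traffic
        sudo sysctl -w net.ipv4.tcp_tw_reuse=1

    Re-enabling KeepAlive with a short timeout is often the more direct fix, since clients then reuse one connection instead of leaving a TIME_WAIT socket per request.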

  • How can I cause Task Scheduler to "fail" if a dialog box returns a certain result?

    - by Roger
    I'm working on a VBScript to do a weekly reboot of all machines on our network. I want to run this script via Task Scheduler. The script runs at 3:00 AM, but there is a small chance that users may still be on the network at that time, and I need to give them the option to terminate the reboot. If they do so, I would like the reboot to occur the next night at 3:00 AM. I've set Task Scheduler up to repeat in this way. So far, so good. The problem is that if the user selects "Cancel" in my script, Task Scheduler does not see my task as failed and won't run it again the next night. Any ideas? Can I pass an error code to Task Scheduler or otherwise abort the task via VBScript? My code is below:

        Option Explicit
        Dim objShell, intShutdown
        Dim strShutdown, strAbort

        ' -r = restart, -t 600 = 10 minutes, -f = force programs to close
        strShutdown = "shutdown.exe -r -t 600 -f"
        Set objShell = CreateObject("WScript.Shell")
        objShell.Run strShutdown, 0, False

        ' go to sleep so the message box appears on top
        WScript.Sleep 100

        ' message box to abort the shutdown
        intShutdown = (MsgBox("Computer will restart in 10 minutes. Do you want to cancel computer restart?", vbYesNo + vbExclamation + vbApplicationModal, "Cancel Restart"))
        If intShutdown = vbYes Then
            ' abort shutdown
            strAbort = "shutdown.exe -a"
            Set objShell = CreateObject("WScript.Shell")
            objShell.Run strAbort, 0, False
        End If
        WScript.Quit

    Appreciate any thoughts.

  • Monitor displays at 1024x768; scrolls screen for higher resolutions

    - by Matt
    I have a dual-monitor setup. Normally both monitors display at 1680x1050, and they have been set up this way for about a year. I'm using Windows XP Professional x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time; in fact, all I had done was close a window (maybe a browser). The resolution is still partially preserved in that the screen scrolls when you move the mouse, so it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine there. I tried uninstalling/reinstalling the drivers to no avail; system restore doesn't help either. I'm unsure of the exact ATI card I'm using; Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself: it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run system restore again. Any ideas? Thanks

  • Logitech QuickCam Pro 9000 & Windows 7 64-bit failing miserably

    - by Saxtus
    I am trying to install a Logitech QuickCam Pro 9000 webcam on Windows 7 64-bit. If I skip the Logitech drivers and use the Windows Update ones instead, the camera works, but with a low frame rate and without face tracking and all the other bells and whistles that the full driver provides. The moment I install the latest official Logitech driver, the problems begin: the camera works fine until I go into the audio settings of the LWS panel or Windows'. Then LWS freezes, and with it everything that tries to output audio. I am not able to open the Playback/Recording devices window (it just doesn't appear), and the system gets unstable and slow, with the LWS.EXE process impossible to close forcefully. If I reboot and leave the camera connected, this situation continues and the system is unstable from the start. If I reboot without the camera connected, everything works fine until I connect it and try to do something with the audio settings of Windows or the LWS panel. I should note that until the freeze occurs, the camera works as expected, with full frame rate, face tracking, and everything it is supposed to do. The sound card is the ASUS SupremeFX II on the ASUS Striker II Extreme motherboard. Any ideas what is causing this, or what else to try so I can make it work as advertised? Thank you.

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is, I also have WAMP installed on my local machine, and its Apache gets started on Windows startup (for some random reason I don't recall now, and it can't be changed). I want to prepare my portable server for situations like this, so killing httpd.exe from the process list and restarting my portable server is not an option. Because of the already-active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81. This is a problem, as the WP site is very dependent on its URL, and I don't want to store the URL with a port in the WP database. Here is what I want to do through .htaccess:

    - On any path except the error.php file, check whether the port is 80.
    - If it is not port 80, redirect to /error.php?code=port.

    Is it possible for this to take priority over WP's redirection and URL handling? In error.php I provide info on how to manually close httpd.exe and such, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help! I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but it doesn't work:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </If>
        <Else>
        RewriteEngine On
        RewriteRule ^(error.php)($|/) - [L]
        RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress

    By the way, the portable server (Server2Go) automatically generates vhosts based on the hostname set in its config file, and changes ports if the port (e.g. 80) is already taken.
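    A hedged alternative, not the poster's code: <If>/<Else> requires Apache 2.4, and the same test can be written with RewriteCond, which also works on 2.2. The rules must sit above the WordPress block to win, which the snippet below arranges by prepending them (run from the directory holding the .htaccess):

        printf '%s\n' \
          'RewriteEngine On' \
          'RewriteCond %{SERVER_PORT} !^80$' \
          'RewriteCond %{REQUEST_URI} !^/error\.php' \
          'RewriteRule ^ /error.php?code=port [L]' \
          | cat - .htaccess > .htaccess.new && mv .htaccess.new .htaccess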

  • APC fragmentation on EC2 Micro for Wordpress + W3TC

    - by Maarten Provo
    I'm trying to optimize APC for my Amazon EC2 micro server running one WordPress site with W3TC. I started with the settings advised by TechZilla in another topic, but I keep getting high fragmentation with 50% of the space free. I've uploaded an image to http://www.maartenprovo.be/downloads/apc.jpg but I can't post it here since I need at least 10 reputation. What values can I optimize to prevent fragmentation?

        [apc]
        apc.enabled=1
        apc.shm_segments=1
        ;32M per WordPress install
        apc.shm_size=164M
        ;Leave at 2M or lower. WordPress doesn't have any file sizes close to 2M
        apc.max_file_size=2M
        ;Relative to the number of cached files
        apc.num_files_hint=1000
        ;Relative to the size of WordPress
        apc.user_entries_hint=4096
        ;The number of seconds a cache entry is allowed to idle in a slot before APC dumps the cache
        apc.ttl=7200
        apc.user_ttl=7200
        apc.gc_ttl=3600
        ;Auto-update cache files on change in WP-ADMIN or W3TC
        apc.stat=1
        ;This MUST be 0, WP can have errors otherwise!
        apc.include_once_override=0
        ;Only set to 1 while debugging
        apc.enable_cli=0
        ;Allow 2 seconds after a file is created before it is cached, to prevent users from seeing half-written/weird pages
        apc.file_update_protection=2
        ;Ignore files
        apc.filters
        apc.slam_defense=0
        apc.write_lock=1
        apc.cache_by_default=1
        apc.use_request_time=1
        apc.mmap_file_mask=/var/tmp/apc.XXXXXX
        apc.stat_ctime=0
        apc.canonicalize=1
        apc.write_lock=1
        apc.report_autofilter=0
        apc.rfc1867=0
        apc.rfc1867_prefix=upload_
        apc.rfc1867_name=APC_UPLOAD_PROGRESS
        apc.rfc1867_freq=0
        apc.rfc1867_ttl=3600
        apc.lazy_classes=0
        apc.lazy_functions=0
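    Not from the post, but a hedged way to watch fragmentation without relying on the screenshot; the doc path is the Debian/Ubuntu default and may differ (the file is sometimes shipped gzipped):

        # the php-apc package ships a status page showing fragmentation,
        # hit rate, and segment layout; serve it from the web root
        sudo cp /usr/share/doc/php-apc/apc.php /var/www/
        # CLI spot-check of the shared-memory allocator state
        # (apc.enable_cli is 0 above, so enable it just for this call)
        php -d apc.enable_cli=1 -r 'print_r(apc_sma_info(true));'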

  • Recommended open-source firmware for ASUS RT-N16

    - by MasterF
    I have recently acquired an ASUS RT-N16 router. My original plan was to install Tomato on it. However, after checking their website I found out that the firmware has not been updated in the last 2 years. There seem to be a few updated mods, but none of them really seemed mature/stable/well-documented. I would like to know what other people recommend as open-source firmware for this router. I know the answers will probably be subjective, so I will give a bit of background on my needs:

    - for now I will only use the Wi-Fi with an Android phone
    - the connection will not be shared with anyone (so QoS is optional)
    - I want a stable (wired) connection on my PC (for online gaming, etc.)
    - I want the (wired) download/upload speeds to be as close as possible to those achieved by plugging the Ethernet cable directly into the PC's network card; I have a 100 Mbps connection
    - my ISP uses PPPoE
    - my technical level: I am a software developer with good knowledge of bash scripting, but no experience with networking

    Also, I know that I could probably just use the stock firmware (and maybe I will for a while), but I'm interested in trying an open-source version (for more features, flexibility, as a learning exercise, etc.).

  • Building a Web proxy to get around same-origin restrictions for a collaborative Webapp based on a MEAN stack

    - by Lew Cohen
    Can anyone point to books, articles, blogs, or even applications, open-source or proprietary, that detail building a Web proxy? This specific proxy will exist to get around the same-origin restrictions that prevent, for instance, loading a given Website into an <iframe> in a Webapp. This Webapp is a collaborative application in which a group of users log in to the app's Website and can then load different Websites into its <iframe> and do various collaborative things (e.g., several users simultaneously browsing a Website, in sync). The Webapp itself is built on a MEAN stack (MongoDB, Express, AngularJS, and Node.js). The purpose of this proxy is not to do anonymous browsing or to bypass censorship. Information on how to build such a vehicle seems not to be readily available from my research. I've come across Glype but am not sure whether it is a feasible solution. I don't want to reinvent the wheel, so if a product is available for purchase, great; else, we'd need to build one. The closest thing I have found is http://www.corsproxy.com. In effect, we'd like to re-create this, since it evidently does what's needed. I don't care what server-side technology is used; our app is MEAN-based, if that has any bearing. Also, the proxy obviously has to honor basic security considerations (user cookies, etc.) and eventually be scalable. So, does anyone know of any sources that detail how to build one of these? Is it even worth building if something already exists? If so, what would be a good candidate? Any other issues that should be considered with this proxy/application? Thanks a lot!
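    One concrete open-source starting point in the Node ecosystem (a suggestion, not mentioned in the post) is the cors-anywhere package, which proxies an arbitrary origin through your own host; the sketch below assumes its bundled demo server reads HOST/PORT from the environment, as its README describes:

        git clone https://github.com/Rob--W/cors-anywhere.git
        cd cors-anywhere && npm install
        # assumption: server.js honors the PORT environment variable
        PORT=8080 node server.js
        # the target URL is appended after the proxy's own address:
        curl http://localhost:8080/http://example.com/

    It addresses the CORS side of same-origin; framing a third-party site in an <iframe> additionally depends on that site's X-Frame-Options policy, which a rewriting proxy would also have to strip.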

  • Word 2003 will not show up in Windows 7

    - by invadersil
    I just installed Windows 7 over the holiday and it went swimmingly well. Today I finished up a few things, like installing MS Office 2003. That went well too, until I tried to open Word. When I try to open Word on its own, it comes up in the taskbar, but the application window does not show. I use Word as the editor in Outlook, where it does work. I also discovered that it starts up and works normally in safe mode, but a normal startup just doesn't show me anything. Oddly, if I start typing while the app is selected in the taskbar and then try to close it, a message pops up asking if I want to save. I tried running the compatibility utility within Windows 7, but still no dice. Has anybody seen this issue yet? The other Office apps start normally.

    Edit: More info: Windows 7 Pro 64-bit. Office is patched up to SP3, and last time I checked there were no further updates (fully updated with KBs after SP3). And this was a fresh install of Windows 7.

  • Kernel Memory Leak in Ubuntu 9.10?

    - by kayahr
    After some days of work (using suspend-to-RAM during the night) I notice I lose more and more available memory. Even when I close all applications, the situation doesn't improve. I even went down to the command line and closed ALL running processes except the init process and the bash I'm working in. I unmounted all the RAM disks Ubuntu uses, and even unloaded all the modules that could be unloaded. But "free" still tells me that 1 GB of RAM is used (without buffers/cache), and "top" shows no process that occupies all this memory. The only way to free the memory is to restart the machine. How can I find out where I'm losing all this memory? Is there a known "suspect" that can cause a problem like this? I'm using Ubuntu 9.10 64-bit on a Dell Latitude E6500 (4 GB RAM) with the latest closed-source NVIDIA driver and GNOME with Compiz. The applications I use most of the time are Firefox and Eclipse. Any hints on how I can find the problem? I'm not a kernel hacker, so if the solution is patching the kernel or something like that, I might be out of the game...
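    A hedged diagnostic sketch with standard tools: kernel-side allocations never show up in top, but they do appear in the slab statistics, which is the usual place to look when userland accounts for none of the usage:

        # how much is truly used vs. reclaimable cache
        free -m
        grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
        # one-shot dump of the biggest kernel slab caches
        sudo slabtop -o | head -n 20
        # drop clean page/dentry/inode caches; anything still "used"
        # afterwards is process memory or a genuine kernel-side leak
        sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'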

  • SSL port didn't work on nginx

    - by Jin Lin
    I set up unicorn and nginx on one of my EC2 machines, and my requests load fine with nginx listening on port 80. But when I enable SSL, listening on port 443, it doesn't work (plain HTTP on port 80 still does):

        server {
            listen 443 ssl;

            # replace with your domain name
            server_name domain.com;
            # replace this with your static Sinatra app files, root + public
            root /home/ubuntu/domain/public;

            ssl on;
            ssl_certificate /etc/ssl/domain.crt;
            ssl_certificate_key /etc/ssl/domain.key;

            # maximum accepted body size of client request
            client_max_body_size 4G;
            # the server will close connections after this time
            keepalive_timeout 5;

            location ~ ^/assets/ {
                add_header ETag "";
                gzip_static on;
                expires max;
                add_header Cache-Control public;
            }

            location / {
                proxy_set_header X-Forwarded-Proto https;
                try_files $uri @app;
            }

            location @app {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                # pass to the upstream unicorn server mentioned above
                proxy_pass http://unicorn_server;
            }
        }
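    A hedged elimination checklist with standard tools; on EC2 the most common culprit is a security group with no inbound rule for 443:

        # does the config parse, and is nginx actually bound to 443?
        sudo nginx -t
        sudo netstat -tlnp | grep ':443'
        # does the TLS handshake complete from the box itself?
        openssl s_client -connect 127.0.0.1:443 -servername domain.com </dev/null
        # if both pass but remote clients still fail, open 443 in the
        # instance's EC2 security group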

  • Build suggestion for mini itx server [on hold]

    - by Spyros
    I am looking to create a home server, and since there does not seem to be a nice ready-made one to purchase, I thought of building a mini-ITX one. I am not familiar with the best hardware/case to pick, and therefore I thought I would ask for your experience. I will most probably install a simple Linux distro, probably Debian, and most probably without a desktop environment. Therefore I am not looking for the best graphics card, though a decent one is preferred. I would say the most important thing is that the case and hardware play well with each other and the machine has decent specs (something like 8 GB RAM and a 1 TB HDD are my main requirements). I understand that this question could be closed as subjective, but it would be a great help for me if some of you who have done a similar build could suggest something. I tried to find a Stack Exchange variant for the question, but this one seems to relate most closely to the subject. I will definitely understand if people close it for being subjective in nature; if you can move it somewhere more appropriate, please feel free to do so. Thank you for your time :)

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS, and it is not about using any SharePoint Services features! The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the file, and close the files, all in code. With previous versions of Excel, the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single-threaded and not designed for use on a server. Most of the articles about this were written before Office 2010. But now Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions that "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is: is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high-performance, distributed-computing aspect of HPC Services for Excel, just the ability to run Excel on a server. Can that now be done? Thanks, Glen

  • Enrich a dataset of POIs with OpenStreetMap

    - by zero
    Update: thanks to hints from users (e.g. Oliver Salzburg and slhck) I became aware of gis.stackexchange.com, so I have moved the topic there myself. Please, can you or somebody with the permission close this question, since we do not need the topic on two sites? Thanks for your work, and keep up the service here!

    I have a list of POIs, some with a full description and some with only a few data entries, like the following:

        6.9441000 50.9242000 [50677] (Ital) Casa di Biase [Köln]
        6.9373600 50.9291800 [50674] (Ital) Al Setaccio [Köln]

    However, I need the full dataset. Can I get this somewhere? Given the position data, is it possible to find the rest?

    a. name of the street
    b. name of the town

    So, for example, the data should finally look like this:

        10.5346100 52.1613600 [38300] (Chin) Wanbao Kommissstr.9 [Wolfenbüttel]
        13.2832500 52.4422600 [14167] (Ital) LaPergola Unter den Eichen 84d [Berlin]
        13.3177700 52.5062900 [10625] (Chin) Good Friends Kantstr.30 [Berlin]

    Can I do this with OpenStreetMap? Should I parse OpenStreetMap data? Or use OpenBabel?
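    What the question describes is reverse geocoding, and OSM's Nominatim service does exactly this. A hedged sketch for the first row above; note the source lists longitude before latitude, and the public endpoint is rate-limited and expects a descriptive User-Agent:

        # first row: lon 6.9441, lat 50.9242
        curl -s -A 'poi-enrichment-test' \
          'https://nominatim.openstreetmap.org/reverse?format=json&lat=50.9242&lon=6.9441' \
          | python -m json.tool
        # the "address" object in the reply carries road, postcode, and city/town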
