Search Results

Search found 1390 results on 56 pages for 'joe k 1973'.

Page 12/56 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Linking session state between servlets and EJBs?

    - by wilth
    Hello, I have servlets (in a web module) that access stateless EJB beans (in an EJB module). The EJB module is built using SEAM. Users can have different roles and the EJB services check this using Seam's Identity. I also use a customized Authenticator (although this might not be relevant here). I noticed problems with this approach and I suspect that the session context in the servlets is not "linked" with the session context in the EJB beans. What I think happens is something like this:

    User Joe accesses servlet A and is assigned session W1. Servlet A calls a login function on an EJB, using EJB session E1. Later, user Mary accesses servlet A and is assigned session W2. When calling the EJBs, however, EJB session E1 is used, and therefore Mary is authenticated as Joe. What also happens is that when Joe calls the servlet twice in rapid succession, the same session W1 is used, but two different sessions E1 and E2 in the business layer, causing errors.

    I might be wrong in my suspicion, and maybe I'm actually expecting these "sessions" to be linked together while they in fact are not. If this is true, is there any way of achieving this? I could - of course - use stateful beans and save the authentication information in the beans, but this would break the "Identity" concept of Seam (and in general, it would be preferable to be able to use the session context in my EJB beans). Any help and pointers are very welcome - thanks!

    Technology: EJB3, Seam 2.1.2. The servlets are actually the server side of a GWT app, although I don't think this matters much. I'm using JBoss 5.
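
    For what it's worth, a minimal sketch (hypothetical names) of one alternative to the stateful-bean workaround mentioned above: since a stateless bean must not carry per-user conversational state, the caller's identity travels with each call instead of living in the bean - at the cost of sidestepping Seam's Identity rather than fixing it:

        import javax.ejb.Stateless;

        // The servlet owns the HTTP session; the stateless bean receives the
        // authenticated user per call and stores nothing between invocations.
        @Stateless
        public class ReportServiceBean {
            public String runReport(String username) {
                // Authorize against the argument, never against bean-local
                // state: the container may hand this instance to any caller.
                return "report for " + username;
            }
        }

        // Servlet side (sketch):
        //   String user = (String) request.getSession().getAttribute("username");
        //   reportService.runReport(user);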

    Read the article

  • SQL View with Data from two tables

    - by Alex
    Hello! I can't seem to crack this - I have two tables (Persons and Companies), and I'm trying to create a view that:
    1) shows all persons
    2) also returns companies by themselves once, regardless of how many persons are related to them
    3) orders by name across both tables
    To clarify, some sample data:

        (Table: Companies)
        Id  Name
        1   Banana
        2   ABC Inc.
        3   Microsoft
        4   Bigwig

        (Table: Persons)
        Id  Name       RelatedCompanyId
        1   Joe Smith  3
        2   Justin
        3   Paul Rudd  4
        4   Anjolie
        5   Dustin     4

    The output I'm looking for is something like this:

        Name       PersonName  CompanyName  RelatedCompanyId
        ABC Inc.   NULL        ABC Inc.     NULL
        Anjolie    Anjolie     NULL         NULL
        Banana     NULL        Banana       NULL
        Bigwig     NULL        Bigwig       NULL
        Dustin     Dustin      Bigwig       4
        Joe Smith  Joe Smith   Microsoft    3
        Justin     Justin      NULL         NULL
        Microsoft  NULL        Microsoft    NULL
        Paul Rudd  Paul Rudd   Bigwig       4

    As you can see, the new "Name" column is ordered across both tables (the company names appear correctly in between the person names), and each company appears exactly once, regardless of how many people are related to it. Can this even be done in SQL?!

    P.S. I'm trying to create a view so I can use this later for easy data retrieval, fulltext indexing, and to make the programming side simpler by just querying the view.
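
    For illustration, a sketch of one UNION ALL shape that seems to match the sample output (table and column names taken from the sample data; untested):

        CREATE VIEW dbo.PersonsAndCompanies AS
        SELECT p.Name             AS Name,
               p.Name             AS PersonName,
               c.Name             AS CompanyName,
               p.RelatedCompanyId AS RelatedCompanyId
        FROM   Persons p
               LEFT JOIN Companies c ON c.Id = p.RelatedCompanyId
        UNION ALL                       -- each company once, standing alone
        SELECT c.Name, NULL, c.Name, NULL
        FROM   Companies c;

        -- A view cannot carry its own ORDER BY; sort at query time:
        -- SELECT * FROM dbo.PersonsAndCompanies ORDER BY Name;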

    Read the article

  • Design pattern to keep track of UITableView rows' correspondence to underlying data in constant time

    - by DenNukem
    When my model changes I want to animate the changes in the UITableView by inserting/deleting rows. For that I need to know the ordinal of the given row (so I can construct an NSIndexPath), which I find hard to do in better-than-linear time. For example, consider that I have a list of address book entries which are manually sorted by the user, i.e. there is no ordering "key" that represents the sort order. There is also a corresponding UITableView that shows one row per address book entry. When the UITableView queries the data source, I query the NSMutableArray populated with my entries and return the required data in constant time for each row. However, if there is a change in the underlying model I get a notification "Joe Smith, id #123 has been removed". Now I have a dilemma. A naive approach would be to scan the array, determine the index at which Joe Smith is, and then ask the UITableView to remove that precise row from the view, also removing it from the array. However, the scan will take linear time to finish. Now I could have an NSDictionary which allows me to find Joe Smith in constant time, but that doesn't do me a lot of good because I still need to find his ordinal index within the array in order to instruct the UITableView to remove that row, which is again a linear search. I could further decide to store each object's ordinal inside the object itself to make the lookup constant, but it will become outdated after the first such update, as all subsequent index values will have changed due to the removal of an object. So what is the correct design pattern to accurately reflect model changes in the UITableView in constant (or at least logarithmic) time?
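
    For illustration, a sketch of the bookkeeping in question (hypothetical types; Swift for concision): an id-to-index dictionary makes finding the row O(1), and only the indices after the removal point need repair, so a delete costs O(n - index) with no searching at all; a true O(log n) bound would need an order-statistic tree instead of the array:

        struct Entry { let id: Int; let name: String }

        final class EntryStore {
            private(set) var entries: [Entry] = []
            private var indexById: [Int: Int] = [:]

            func append(_ e: Entry) {
                indexById[e.id] = entries.count
                entries.append(e)
            }

            // Returns the row index to hand to the table view's row-deletion API.
            func remove(id: Int) -> Int? {
                guard let idx = indexById.removeValue(forKey: id) else { return nil }
                entries.remove(at: idx)
                for i in idx..<entries.count {      // repair the suffix indices
                    indexById[entries[i].id] = i
                }
                return idx
            }
        }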

    Read the article

  • SQL SERVER – Disable Clustered Index and Data Insert

    - by pinaldave
    Earlier today I received the following email. "Dear Pinal, [Removed unrelated content] We looked at your script and found out that in your script for disabling indexes, you have only included the non-clustered indexes during the bulk insert and missed disabling the clustered indexes. Our DBA [name removed] has changed your script a bit and included all the clustered indexes. Since then our application is not working. When DBA [name removed] tried to enable the clustered indexes again he is facing an incorrect syntax error. We are in deep problem [word replaced] [Removed identity of organization and few unrelated stuff]"

    I replied to my client and helped them fix the problem. What really came to my attention is the concept of disabling a clustered index. Let us try to learn a lesson from this experience. In this case, there was no need to disable the clustered index at all. I had done the necessary work when I was called in for the tuning project: I removed unused indexes, created a few optimal indexes, and wrote a script to disable a few selected high-cost indexes while bulk insert (and similar) operations are performed. There was another script that rebuilds all the indexes as well. The solution worked until they included the clustered index in the disabling script.

    A clustered index is in fact the table itself, physically ordered according to one or more keys (columns) (there is more to it, but that is outside the scope of this article). When a clustered index is disabled, the data rows of the table cannot be accessed. This means no insert is possible. When a nonclustered index is disabled, its data is physically deleted but the definition of the index is kept in the system. For the same reason, even reorganizing the index is not possible until the disabled clustered index is rebuilt.

    Now let us come to the second part of the question, regarding the error received when the clustered index is 'enabled'. This is a very common question I receive on the blog. (The following statement is written with the syntax of T-SQL in mind.) A clustered index can be disabled but cannot be 'enabled'; it has to be rebuilt. It is intuitive to think that something which we have 'disabled' can be 'enabled', but the syntax for that is REBUILD. This issue has been explained here: SQL SERVER – How to Enable Index – How to Disable Index – Incorrect syntax near 'ENABLE'.

    Let us go over this example, where inserting data is not possible while the clustered index is disabled.

        USE AdventureWorks
        GO
        -- Create Table
        CREATE TABLE [dbo].[TableName](
            [ID] [int] NOT NULL,
            [FirstCol] [varchar](50) NULL,
            CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO
        -- Create Nonclustered Index
        CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName]
        ON [dbo].[TableName] ([FirstCol] ASC)
        GO
        -- Populate Table
        INSERT INTO [dbo].[TableName]
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third'
        GO
        -- Disable Nonclustered Index
        ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Insert Data should work fine
        INSERT INTO [dbo].[TableName]
        SELECT 4, 'Fourth'
        UNION ALL
        SELECT 5, 'Fifth'
        GO
        -- Disable Clustered Index
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE
        GO
        -- Insert Data will fail
        INSERT INTO [dbo].[TableName]
        SELECT 6, 'Sixth'
        UNION ALL
        SELECT 7, 'Seventh'
        GO
        /*
        Error: Msg 8655, Level 16, State 1, Line 1
        The query processor is unable to produce a plan because the index
        'PK_TableName' on table or view 'TableName' is disabled.
        */
        -- Reorganizing Index will also throw an error
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] REORGANIZE
        GO
        /*
        Error: Msg 1973, Level 16, State 1, Line 1
        Cannot perform the specified operation on disabled index 'PK_TableName'
        on table 'dbo.TableName'.
        */
        -- Rebuilding should work fine
        ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD
        GO
        -- Insert Data should work fine
        INSERT INTO [dbo].[TableName]
        SELECT 6, 'Sixth'
        UNION ALL
        SELECT 7, 'Seventh'
        GO
        -- Clean Up
        DROP TABLE [dbo].[TableName]
        GO

    I hope this example is clear enough. There were a few additional posts I had written years ago; I am listing them here.
    SQL SERVER – Enable and Disable Index Non Clustered Indexes Using T-SQL
    SQL SERVER – Enabling Clustered and Non-Clustered Indexes – Interesting Fact
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, SQL, SQL Authority, SQL Constraint and Keys, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Windows Backup to network share (Server 2008)

    - by Joe
    I'm trying to set up Windows Backup on a Server 2008 machine to back up to a network share. When I run the wizard to set up the backup, I get the error message "The user name being used for accessing the remote share folder is not recognized by the local computer". I have no idea what this means. Help? The server with the network share is a domain controller (also Server 2008). The server I am trying to back up is not, and is not part of the domain.

    Read the article

  • LiteSpeed enable Access-Control-Allow-Origin (no response header on CORS request)

    - by Joe Coder Guy
    Seriously, I can't find a single page discussing this for LiteSpeed. Using this format in the .htaccess:

        Header set Access-Control-Allow-Origin http://aSite.com

    (and the https variant) sends the setting in the HTTP response header, but I still get the "XMLHttpRequest cannot load https://aSite.com/aFile.php. Origin aSite.com is not allowed by Access-Control-Allow-Origin" error when trying to access https from an http origin. Also, I receive no response header for https; only that message shows up in Chrome. Is the server still blocking it even though I've sent the proper headers? I read elsewhere that it helps to add these terms:

        Access-Control-Allow-Headers X-Requested-With
        Access-Control-Allow-Methods OPTIONS, GET, POST
        Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control

    but I don't see these in my headers. Using these, my PHP files aren't even reached (they register no errors or anything), so it looks like it comes from the server only, but what do I know. Thanks in advance!

    Update: since there is no response header, Prashant seems to suggest it's a server issue, since it worked on another server (http://stackoverflow.com/questions/11953132/no-response-obtained-while-implementing-cors). Anyone know how to flip this switch?

    Headers work now: the LiteSpeed format was bad. It should look like this (still being denied, though):

        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Methods OPTIONS
        Header set Access-Control-Allow-Methods GET
        Header set Access-Control-Allow-Methods POST
        Header set Access-Control-Allow-Headers Content-Type
        Header set Access-Control-Allow-Headers Depth
        Header set Access-Control-Allow-Headers User-Agent
        Header set Access-Control-Allow-Headers X-File-Size
        Header set Access-Control-Allow-Headers X-Requested-With
        Header set Access-Control-Allow-Headers If-Modified-Since
        Header set Access-Control-Allow-Headers X-File-Name
        Header set Access-Control-Allow-Headers Cache-Control
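
    Worth checking (an assumption based on Apache's mod_headers, whose directive syntax LiteSpeed follows): "Header set" replaces any previous value for that header name, so the run of separate set lines above would leave only the last Methods line (POST) and the last Headers line (Cache-Control) in the response. Combining each header into a single directive sidesteps that:

        Header set Access-Control-Allow-Origin "http://aSite.com"
        Header set Access-Control-Allow-Methods "OPTIONS, GET, POST"
        Header set Access-Control-Allow-Headers "Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control"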

    Read the article

  • Litespeed enable Access-Control-Allow-Origin

    - by Joe Coder Guy
    Seriously, I can't find a single page discussing this for LiteSpeed. Using this format in the .htaccess:

        Header set Access-Control-Allow-Origin http://aSite.com

    (and the https variant) sends the setting in the header, but I still get the "XMLHttpRequest cannot load https://aSite.com/aFile.php. Origin aSite.com is not allowed by Access-Control-Allow-Origin" error. Is the server still blocking it even though I've sent the proper headers? I read elsewhere that it helps to add these terms:

        Access-Control-Allow-Headers X-Requested-With
        Access-Control-Allow-Methods OPTIONS, GET, POST
        Access-Control-Allow-Headers Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control

    but I don't see these in my headers. Using these, my PHP files aren't even reached (they register no errors or anything), so it looks like it comes from the server only, but what do I know. Thanks in advance!

    Read the article

  • How to debug Ubuntu/Cisco VPN issues

    - by Joe Casadonte
    I'm trying to connect an Ubuntu laptop (9.10) to some kind of Cisco VPN device; I don't know what's on the other end, and I'm not likely to find out exactly what. I know my company allows VPN from Linux clients because they provide one, but I cannot get it to install (it fails to compile). I've had the most luck with the network-manager-vpnc package; however, I can't figure out what's failing. When I try to connect, I get this message from libnotify:

        The VPN connection 'XXX' failed.

    which is not very helpful. I've scoured the system logs and all I can find is this:

        Dec 27 12:57:45 jcasadon-lap NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.vpnc'...
        Dec 27 12:57:45 jcasadon-lap NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' started (org.freedesktop.NetworkManager.vpnc), PID 2672
        Dec 27 12:57:45 jcasadon-lap NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.vpnc' just appeared, activating connections
        Dec 27 12:58:00 jcasadon-lap NetworkManager: <info> VPN plugin state changed: 3
        Dec 27 12:58:00 jcasadon-lap NetworkManager: <info> VPN connection 'AmericasEast' (Connect) reply received.
        Dec 27 12:58:00 jcasadon-lap NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/tun0, iface: tun0)
        Dec 27 12:58:00 jcasadon-lap kernel: [ 6144.529002] tun0: Disabled Privacy Extensions
        Dec 27 12:58:00 jcasadon-lap NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/tun0, iface: tun0): no ifupdown configuration found.
        Dec 27 12:58:15 jcasadon-lap NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/tun0, iface: tun0)
        Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> VPN plugin failed: 1
        Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> VPN plugin state changed: 6
        Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> VPN plugin state change reason: 0
        Dec 27 12:58:15 jcasadon-lap NetworkManager: <WARN> connection_state_changed(): Could not process the request because no VPN connection was active.
        Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> (wlan0): writing resolv.conf to /sbin/resolvconf
        Dec 27 12:58:15 jcasadon-lap NetworkManager: <info> Policy set 'Northbound Train' (wlan0) as default for routing and DNS.
        Dec 27 12:58:27 jcasadon-lap NetworkManager: <debug> [1261936707.002971] ensure_killed(): waiting for vpn service pid 2672 to exit
        Dec 27 12:58:27 jcasadon-lap NetworkManager: <debug> [1261936707.003175] ensure_killed(): vpn service pid 2672 cleaned up

    I have no idea where to go from here. Tomorrow I'll ask the IT/IS guys if there's anything they can tell me from their end, but I don't know if they'll be able to tell me anything. Any ideas? Thanks!
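
    For reference, one way to get more detail than libnotify's one-liner is to run the vpnc client by hand, outside NetworkManager (flags per the vpnc man page; the config file path is an assumption):

        # --no-detach keeps vpnc in the foreground; --debug takes 0-3
        # (or 99 for packet dumps)
        sudo vpnc --debug 2 --no-detach /etc/vpnc/default.conf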

    Read the article

  • Using an ATA-100 Hard Drive with a Thermaltake BlacX External Hard Drive Dock

    - by Joe
    Is it possible for a Thermaltake BlacX HDD Dock to connect to and recognize an ATA-100 hard drive? I know that the specifications for the BlacX say that it only supports SATA & SATA II, but I was hoping for one of three things:
    1) for it to still work even though it isn't supported
    2) for there to be some sort of workaround to make this possible
    3) for there to be another part of some sort that I could purchase to make this work

    Read the article

  • West Palm Beach Developers’ Group Celebrates its Fifth Anniversary as a Member of INETA

    - by Sam Abraham
    Earlier this week marked our fifth anniversary as an INETA group, a fact that we had forgotten but thankfully INETA remembered. In celebrating our membership, INETA sent us a certificate recognizing our membership, which we will be sharing with our members at our upcoming meeting. It's been a great two-year tenure for me as group co-coordinator, working with Venkat Subramanian, who has been involved with the group since its inception. Moving into the future we hope to grow both group membership and leadership. We continue to strive to bring added value to our membership, which can only happen with your ideas, feedback and involvement in our community-driven group. Our next, almost sold-out, meeting will be taking place on 8/28/2012 at 6:30 PM (register at: http://www.fladotnet.com/Reg.aspx?EventID=607). Will Strohl, DotNetNuke's Technical Evangelist, will be presenting an overview on getting started with DNN's latest 6.2 version, all while taking us on a deep dive into its built-in social networking integration features. There is still time to register, but don't procrastinate! Our September meeting will feature Jonas Stawski, Microsoft MVP, sharing with us on SignalR, while October will bring us the much anticipated visit by our Microsoft Developer Evangelist Joe Healy, who will be talking to us about the latest with Windows 8. Joe will also be presenting in Miami the day after our event, in case you miss his West Palm appearance. We look forward to meeting you at our upcoming meetings. All the best --Sam Abraham

    Read the article

  • Open vSwitch and Xen Private Networks

    - by Joe
    I've read about the possibilities of using Open vSwitch with Xen to route traffic between domUs on multiple physical hosts. I'd like to be able to group the multiple domUs I have spread out across multiple physical hosts into a number of private networks. However, I've found no documentation on how to integrate Open vSwitch with Xen (rather than XenServer) and am unsure how I should go about doing so and then creating the private networks described. As you might have gathered then - from research I think Open vSwitch can do what I need it to, but I just can't find anything giving me a push in the right direction of how to actually use it to do so! This may well be because Open vSwitch is quite new (version 1.0 released on May 17). Any pointers in the right direction would be much appreciated!
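
    For reference, a minimal sketch of that integration (bridge name, NIC and VLAN tags are free choices here; vifX.Y is Xen's naming for a domU's virtual interface): the Open vSwitch bridge stands in for the Linux bridge that Xen's network scripts normally attach vifs to, and a VLAN tag on each vif port puts the domU into one of the private networks, consistently across hosts:

        # Create an OVS bridge and uplink it through the physical NIC
        ovs-vsctl add-br xenbr0
        ovs-vsctl add-port xenbr0 eth0

        # Attach a domU's virtual interface with a VLAN tag; domUs sharing
        # tag 10 (on any host) form one isolated private network
        ovs-vsctl add-port xenbr0 vif3.0 tag=10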

    Read the article

  • Vboxheadless without a command prompt (VirtualBox)

    - by joe
    I'm trying to run VirtualBox VMs in the background from a service. I'm having trouble starting the process the way I want: I'd like to start the VirtualBox guest in headless mode as a separate process and show nothing as far as GUI goes. Here's what I've tried. From the command line:

        start vboxheadless -s "Ubuntu Server"

    In C#:

        ProcessStartInfo info = new ProcessStartInfo
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            ErrorDialog = false,
            WindowStyle = ProcessWindowStyle.Hidden,
            CreateNoWindow = true,
            FileName = "C:/program files/sun/virtualbox/vboxheadless",
            Arguments = "-s \"Ubuntu Server\""
        };
        Process p = new Process();
        p.StartInfo = info;
        p.Start();
        String output = p.StandardOutput.ReadToEnd(); // BLOCKS! (output stream isn't closed)

    I want to be able to read the output to know whether starting the server succeeded. However, it seems as though the spawned process never closes its output stream. It's also worth mentioning that I've tried using vboxmanage startvm "Ubuntu Server" --type=vrdp; I can determine whether the server started properly using this, but it shows a new command prompt window for the newly started VirtualBox guest.
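
    For reference, a hedged sketch of an alternative to the blocking ReadToEnd(): consuming stdout asynchronously with the standard System.Diagnostics event API, so a child that never closes its output stream can't hang the caller (the "listening" marker below is hypothetical - match whatever startup line your VBoxHeadless build actually prints):

        using System.Diagnostics;

        class VmLauncher
        {
            static void Main()
            {
                var info = new ProcessStartInfo
                {
                    FileName = @"C:\Program Files\Sun\VirtualBox\VBoxHeadless.exe", // path assumed
                    Arguments = "-s \"Ubuntu Server\"",
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    CreateNoWindow = true
                };
                var p = new Process { StartInfo = info };
                p.OutputDataReceived += (sender, e) =>
                {
                    // React to the guest's startup line instead of waiting
                    // for the stream to close.
                    if (e.Data != null && e.Data.ToLower().Contains("listening"))
                    {
                        // VM is up; signal the service, set an event, etc.
                    }
                };
                p.Start();
                p.BeginOutputReadLine();   // asynchronous; returns immediately
            }
        }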

    Read the article

  • "SC.EXE config" and dollar sign in service name

    - by Joe Schmoe
    Say I want to make a Windows service's startup dependent on SQL Server. In my case the service name for SQL Server is MSSQL$SQL11 (SQL11 is the SQL Server instance name). However, when I issue this command:

        SC.EXE config MyService depend= MSSQL$SQL11

    everything after the dollar sign is ignored. When I go to the "Dependencies" tab in "Services", SQL Server is not listed. When I check the matching registry key it becomes clear why: it only has MSSQL. At this point I have to edit the registry by hand to change MSSQL to MSSQL$SQL11, and then everything works as expected. Putting quotes around MSSQL$SQL11 doesn't help. Is there a way to specify $ in the middle of the SC.EXE argument string?
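
    For what it's worth, one hedged guess worth ruling out (the question doesn't say which shell was used): cmd.exe passes $ through literally, but PowerShell parses $SQL11 as a variable - even inside double quotes - and expands it to nothing if unset, which would produce exactly this truncation. In PowerShell, single quotes keep the string literal, and the registry can confirm what actually arrived:

        # PowerShell: single quotes prevent $SQL11 being expanded as a variable
        sc.exe config MyService depend= 'MSSQL$SQL11'

        # Show what actually landed in the registry
        reg query HKLM\SYSTEM\CurrentControlSet\Services\MyService /v DependOnService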

    Read the article

  • Format CD-ROM on Windows 7 that Windows 95 can read

    - by Joe Majsterski
    I pulled out my ancient 100 MHz Pentium running Windows 95 to play a game from 1996. This game has a critical bug in it that requires a patch. The problem is, the computer has no way to connect to the Internet or to the LAN. I tried burning a CD on my Windows 7 PC to run on the Win95 PC, but it doesn't even recognize that there's a disc in the drive. I did some research, and apparently Windows 95 can't read the UDF format. All the solutions recommend, of course, downloading a driver or fix or somesuch, which is my entire problem in the first place. I tried formatting the CD on my Win7 PC, but all the format choices are versions of UDF. Is there a way to get Windows 7 to format a disc in a way that is compatible with Windows 95?
    EDIT: I think the problem may be that I only have CD-RWs. I think a regular CD-R might work, but I can't find any in the house. I'll see if I can scrounge one up and try that.

    Read the article

  • Bash script 'while read' loop causes 'broken pipe' error when run with GNU Parallel

    - by Joe White
    According to the GNU Parallel mailing list this is not a GNU Parallel-specific problem. They suggested that I post my problem here. The error I'm getting is a "broken pipe" error, but I feel I should first explain the context of my problem and what causes this error. It happens when trying to use any bash script containing a 'while read' loop in GNU Parallel. I have a basic bash script like this:

        #!/bin/bash
        # linkcheck.sh
        while read domain
        do
            host "$domain"
        done

    Assume that I want to pipe in a large list (say 250 MB):

        cat urllist | ./linkcheck.sh

    Running the host command on 250 MB worth of URLs is rather slow. To speed things up I want to break the input into chunks before piping it and then run multiple jobs in parallel. GNU Parallel is capable of doing this:

        cat urllist | parallel --pipe -j0 parallel ./linkcheck.sh {}

    {} is substituted by the contents of urllist line by line. Assume that my system's default setup is capable of running 500-ish jobs per instance of parallel. To get around this limitation we can parallelize Parallel itself:

        cat urllist | parallel -j10 --pipe parallel -j0 ./linkcheck.sh {}

    This will run 5000-ish jobs. It will also, sadly, cause the "broken pipe" error (bash FAQ). Yet the script starts to work if I remove the while read loop and take input directly from whatever is fed into {}, e.g.:

        #!/bin/bash
        # linkchecker.sh
        domain="$1"
        host "$1"

    Why will it not work with a while read loop? Is it safe to just turn off the SIGPIPE signal to stop the "broken pipe" message, or will that have side effects such as data corruption? Thanks for reading.

    Read the article

  • Linux can't find file that exists

    - by Joe
    I'm trying to get Google's Dart language up and running, but it errors when running dart2js. I'm running Arch Linux and I installed dart-sdk from the AUR. Some relevant terminal output lies below.

        % dart2js main.dart
        /usr/local/bin/dart2js: line 7: /usr/local/bin/dart: No such file or directory

        % cat /usr/local/bin/dart2js
        #!/bin/sh
        # Copyright (c) 2012, the Dart project authors. Please see the AUTHORS file
        # for details. All rights reserved. Use of this source code is governed by a
        # BSD-style license that can be found in the LICENSE file.
        BIN_DIR=`dirname $0`
        exec $BIN_DIR/dart --allow_string_plus=false $BIN_DIR/../lib/dart2js/lib/compiler/implementation/dart2js.dart "$@"

        % file /usr/local/bin/dart
        /usr/local/bin/dart: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, BuildID[sha1]=0x27fe166ca015c1adfeaf3a6f9c018e7d7af46d9f, stripped

        % ls -alh /usr/local/bin
        total 4.9M
        drwxr-xr-x  2 root root 4.0K Jun 10 22:51 .
        drwxr-xr-x 12 root root 4.0K Jun 10 22:51 ..
        -rwxr-xr-x  1 root root 422K May 10 22:41 cargo
        -rwxr-xr-x  1 root root 2.7M Jun 10 22:50 dart
        -rwxr-xr-x  1 root root  360 Jun  6 16:20 dart2js
        -rwxr-xr-x  1 root root  176 Jun  6 16:20 pub
        -rwxr-xr-x  1 root root 166K May 10 22:41 rustc
        -rwxr-xr-x  1 root root 1.6M May 10 22:41 rustdoc

        % uname -rm
        3.3.7-1-ARCH x86_64

    Could it be because I'm running a 64-bit OS and the dart binary is 32-bit?
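
    For reference, a quick check that separates "the file is missing" from "the 32-bit loader the file asks for is missing" - on x86_64, executing a 32-bit ELF whose interpreter is absent fails with the same misleading "No such file or directory" (the lib32-glibc suggestion assumes Arch's multilib repository is enabled):

        # Which dynamic loader does the 32-bit binary request?
        readelf -l /usr/local/bin/dart | grep interpreter

        # If that loader (typically /lib/ld-linux.so.2) is absent, the 32-bit
        # glibc is missing; on Arch it comes from the multilib lib32-glibc package
        ls -l /lib/ld-linux.so.2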

    Read the article

  • Disable click action after letting up a mouse wheel hold scroll in Chrome browser

    - by Joe Miller
    I apologize in advance for the confusing title, not sure what the best way to describe this action is. Basically, I am holding down the mouse wheel and then moving the mouse itself up and down to scroll (not actually rotating the mouse wheel forward or backward). This is often the most convenient way to scroll for me. Unfortunately, when I scroll in this way, and then let up the mouse wheel again, it performs a click action, so if the arrow happens to land on a link when I let up the mouse wheel, I end up inadvertently clicking that link. How can I prevent the mouse from performing a click action when I use the mouse wheel to scroll by holding it down and then letting it up when I am done scrolling? It seems like this is only happening in the Chrome browser. Thanks! Windows 7, Chrome Browser, Logitech Mouse

    Read the article

  • Node.js, Nginx and Varnish with WebSockets

    - by Joe S
    I'm in the process of architecting the backend of a new Node.js web app that I'd like to be pretty scalable, but not overkill. In all of my previous Node.js deployments, I have used Nginx to serve static assets such as JS/CSS and reverse proxy to Node (as I've heard Nginx does a much better job of this, and Express is not really production-ready). However, Nginx does not support WebSockets. I am making extensive use of Socket.IO for the first time and discovered many articles detailing this limitation. Most of them suggest using Varnish to direct the WebSockets traffic directly to Node, bypassing Nginx. This is my current setup:
    Varnish: port 80 - routing HTTP requests to Nginx and WebSockets directly to Node
    Nginx: port 8080 - serving static assets like CSS/JS
    Node.js Express: port 3000 - serving the app, over HTTP + WebSockets
    However, there is now the added complexity that Varnish doesn't support HTTPS, which requires Stunnel or some other solution; it's also not load balanced yet (perhaps I will use HAProxy or something). The complexity is stacking up! I would like to keep things simpler than this if possible. Is it still necessary to reverse proxy Node.js using Nginx when Varnish is also present? As even if Express is slow at serving static files, they should theoretically be cached by Varnish. Or are there better ways to implement this?
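
    For reference, a sketch of the Varnish side of the routing described above (Varnish 2/3-era VCL; backend names and addresses are assumptions):

        backend node  { .host = "127.0.0.1"; .port = "3000"; }
        backend nginx { .host = "127.0.0.1"; .port = "8080"; }

        sub vcl_recv {
            if (req.http.Upgrade ~ "(?i)websocket") {
                set req.backend = node;   # WebSockets bypass Nginx entirely
                return (pipe);
            }
            set req.backend = nginx;      # plain HTTP goes to Nginx as before
        }

        sub vcl_pipe {
            # Carry the upgrade headers across the piped connection
            set bereq.http.Upgrade = req.http.Upgrade;
            set bereq.http.Connection = req.http.Connection;
        }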

    Read the article

  • Windows 7 Boot Screen Resolution/Aspect Ratio

    - by Joe
    I have been using my Windows 7 PC with my 46" Samsung HDTV at 1920x1080 resolution. The boot screen always seems to be at 1024x768, and its aspect ratio doesn't match the 1080p ratio. I read on an MSDN blog that Microsoft only made the boot screen in one resolution, so that cannot be changed. The result is that on my TV the boot screen looks stretched out. Is there a way to change the aspect ratio or crop it in some way? Is there any method at all to make it look normal, or just better, without it looking stretched?

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >