Search Results

Search found 18521 results on 741 pages for 'tcp window scaling'.

  • [C#] Async threaded TCP server

    - by mark_dj
    I want to create a high-performance server in C# that can handle about ~10k clients. I started writing a TcpServer in C#, and for each client connection I open a new thread; I also use one thread to accept connections. So far so good, it works fine. The server has to deserialize incoming AMF objects, do some logic (like saving the position of a player) and send some objects back (serializing objects). I am not worried about the serializing/deserializing part at the moment. My main concern is that with 10k clients I will have a lot of threads, and I've read somewhere that an OS can only hold a few hundred threads. Are there any sources/articles available on writing a decent async threaded server? Are there other possibilities, or will 10k threads work fine? I've looked on Google, but I couldn't find much info about design patterns or approaches that explain it clearly.
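
    For reference, the standard alternative to thread-per-client is asynchronous accept/read, where a small pool of threads multiplexes I/O across all connections. A minimal sketch on modern .NET (the question predates async/await; the port number and HandleClientAsync are illustrative, and the echo stands in for the AMF logic):

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class AsyncTcpServer
        {
            static async Task Main()
            {
                var listener = new TcpListener(IPAddress.Any, 9000);
                listener.Start();
                while (true)
                {
                    // Accept without dedicating a thread to each connection
                    TcpClient client = await listener.AcceptTcpClientAsync();
                    _ = HandleClientAsync(client); // pool threads service the I/O
                }
            }

            static async Task HandleClientAsync(TcpClient client)
            {
                using (client)
                {
                    NetworkStream stream = client.GetStream();
                    var buffer = new byte[4096];
                    int read;
                    // ReadAsync parks on an I/O completion port, not a blocked thread
                    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                    {
                        // deserialize AMF, update state, serialize a reply here;
                        // the echo below is only a placeholder
                        await stream.WriteAsync(buffer, 0, read);
                    }
                }
            }
        }

    With this shape, 10k mostly-idle connections cost buffers rather than 10k thread stacks.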

    Read the article

  • In TCP/IP terms, how does a download speed limiter in an office work?

    - by TessellatingHeckler
    Assume an office of people; they want to limit HTTP downloads to a max of 40% of their internet connection's bandwidth so that downloads don't block other traffic. We say "it's not supported in your firewall", and they say the inevitable line "we used to be able to do it with our Netgear/DLink/DrayTek". Thinking about it, a download goes like this: (1) HTTP GET request; (2) server sends file data as TCP packets; (3) client acknowledges receipt of the TCP packets; (4) repeat until the download is finished. The speed is determined by how fast the server sends data to you and how fast you acknowledge it. So, to limit download speed, you have two choices: 1) instruct the server to send data to you more slowly, and I don't think there's any protocol feature to request that in TCP or HTTP; 2) acknowledge packets more slowly by limiting your upload speed, which also ruins your upload speed. How do devices do this limiting? Is there a standard way?
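
    TCP does in fact carry a lever for this: the receive window. A receiver that drains its socket slowly fills its kernel receive buffer, the advertised window shrinks, and the sender is forced to slow down; shaping devices exploit the same flow-control loop. A hedged C# sketch of that effect from the client side (the rate value and method names are illustrative):

        using System.Net.Sockets;
        using System.Threading.Tasks;

        static class ThrottledReceive
        {
            // Read from a connected stream at roughly maxBytesPerSecond.
            // Pausing between reads keeps the kernel buffer full, so the
            // advertised TCP receive window shrinks and the sender backs off.
            public static async Task DrainAsync(NetworkStream stream, int maxBytesPerSecond)
            {
                var buffer = new byte[8192];
                int read;
                while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    // ...process 'read' bytes here...
                    int delayMs = (int)(1000L * read / maxBytesPerSecond);
                    if (delayMs > 0)
                        await Task.Delay(delayMs);
                }
            }
        }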

    Read the article

  • How to avoid/prevent the system from drawing/redrawing/refreshing/painting a WPF window

    - by Leo
    I have a WPF application in Visual C# (using Visual Studio 2010) and I want to draw OpenGL scenes on the WPF window itself. As for the OpenGL drawing itself, I'm able to do it without problems; I can create a GL render context from the WPF main window itself (no additional OpenGL control, no Win32 window inside the WPF window), use GL commands and swap buffers (all this is done inside a DLL, written in C, that I created myself). However, I get annoying flickering when, for example, I resize the window. I overrode the OnRender method to redraw with OpenGL, but after this the window is redrawn with the background color; it's likely that the system is automatically redrawing it. With Windows Forms I'm able to prevent the system from redrawing automatically (setting some ControlStyles to true or false, like UserPaint = true, AllPaintingInWmPaint = true, Opaque = true, ResizeRedraw = true, DoubleBuffer = false), but, aside from setting Opacity to 1, I don't know how to do any of that with WPF. I was hoping that overriding OnRender with no operations inside it would avoid redrawing, but somehow the system still draws the background. Does anyone know how to prevent the system from redrawing the window? Thanks for your time.
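
    One avenue worth trying (a sketch of an approach, not a confirmed fix, since WPF normally draws its background itself rather than via Win32 erase messages): a WPF window still has a Win32 message loop underneath, so you can hook it with HwndSource and swallow WM_ERASEBKGND, which is roughly what AllPaintingInWmPaint bought you in Windows Forms:

        using System;
        using System.Windows;
        using System.Windows.Interop;

        public partial class MainWindow : Window
        {
            const int WM_ERASEBKGND = 0x0014;

            protected override void OnSourceInitialized(EventArgs e)
            {
                base.OnSourceInitialized(e);
                IntPtr hwnd = new WindowInteropHelper(this).Handle;
                HwndSource.FromHwnd(hwnd).AddHook(WndProc);
            }

            private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam,
                                   IntPtr lParam, ref bool handled)
            {
                if (msg == WM_ERASEBKGND)
                {
                    handled = true;   // claim the erase ourselves...
                    return (IntPtr)1; // ...and draw nothing, avoiding the background flash
                }
                return IntPtr.Zero;
            }
        }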

    Read the article

  • Is it possible to add TCP autotuning to Windows XP?

    - by Caspin
    I have a network application that needs to send messages 60 times a second. The messages are usually 300-400 bytes but can be as large as 1500. The default setting for SO_SNDBUF is too small and limits the number of messages that can be sent if the network latency is anything greater than 100 ms. The naive solution is to just bump the SO_SNDBUF size to something large; however, depending on the latency and the packet size, that could be anywhere from 64 KB to 8 MB. One of Vista's new features is TCP autotuning: autotuning monitors the TCP connection and dynamically adjusts the buffer sizes to allow for optimal communication. I would like to use autotuning on our Windows XP machines so I don't need to guess what my buffer sizes should be. Is there a way to install either a Microsoft or third-party TCP autotuner on Windows XP?
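
    Absent autotuning, the value being guessed at is the bandwidth-delay product of the path, so one fallback is to measure (or estimate) the RTT and size SO_SNDBUF per connection yourself. A hedged C# sketch (the figures in the trailing comment are examples, not measurements from the question):

        using System;
        using System.Net.Sockets;

        static class SendBufferSizing
        {
            // SO_SNDBUF must cover the bandwidth-delay product, or the sender
            // stalls waiting for ACKs each time the buffer drains.
            public static void SizeForPath(Socket socket, long bitsPerSecond, double rttSeconds)
            {
                long bdpBytes = (long)(bitsPerSecond / 8.0 * rttSeconds);
                socket.SendBufferSize = (int)Math.Max(8 * 1024, bdpBytes);
            }
        }

        // e.g. a 10 Mbit/s path at 300 ms RTT needs about
        // 10,000,000 / 8 * 0.3 = 375,000 bytes of send buffer.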

    Read the article

  • Iframe/Popup redirecting opener window

    - by Bara
    Hello all, I have a page located at x.com. On this page is a button that, when clicked, launches a new window (using JavaScript's window.open() method) to a page located at z.com. The popup does a few things, then redirects the original window (the opener, x.com) to a different page based on some parameters defined in the popup. This works fine in Firefox/Chrome, but not in IE. In IE (8 specifically, but I believe 7 also has this problem) the original window (the opener) is not redirected; instead, a new window comes up and THAT window is redirected. I've tried many different methods to get this to work, including changing the popup to an iframe loaded on the page and having a function on the opener that the popup/iframe calls. The problem seems to be that IE refuses to allow cross-domain sites to talk to each other via JavaScript. Is there a way around this? How can I get the parent window to redirect to a page based on parameters from a popup or iframe? Bara

    Read the article

  • TCP 3-way handshake

    - by Tom
    Hi, I'm just observing what Nmap does for the 3 ports it reports as open. I understand what a half-open scan is, but what's happening doesn't make sense. Nmap reports that ports 139 and 445 are open... all fine. But when I look at the control bits, Nmap never sends RST once it has found out the port is open. It does this for port 135, but not for 139 and 445. This is what happens (I have omitted the victim's replies): sends a 2 (SYN); sends a 16 (ACK); sends a 24 (ACK + PSH); sends a 16 (ACK); sends a 17 (ACK + FIN). I don't get why Nmap doesn't RST ports 139 and 445?
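
    Those numeric control-bit values decode directly from the standard TCP flag bit positions; a small C# reference sketch:

        using System;

        [Flags]
        enum TcpFlags : byte
        {
            FIN = 1,
            SYN = 2,
            RST = 4,
            PSH = 8,
            ACK = 16,
            URG = 32
        }

        class DecodeFlags
        {
            static void Main()
            {
                // The values observed in the capture: 2, 16, 24, 16, 17
                foreach (byte bits in new byte[] { 2, 16, 24, 16, 17 })
                    Console.WriteLine($"{bits,2} = {(TcpFlags)bits}");
                // 24 = PSH | ACK (data push), 17 = FIN | ACK (graceful close),
                // so the trace shows a full connect-and-close, not a RST teardown.
            }
        }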

    Read the article

  • C# TCP First Message Delay

    - by ikurtz
    Greetings, I am writing a socket program in C# (asynchronous sockets). The issue is that when a client connects to the server, the connection happens quite fast, but when the first message is sent there is a delay in the response. This only happens with the very first data sent over the connection, and both client and server suffer from this behaviour. What is this delay? Is there a way to get rid of it? Many thanks.
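
    One common culprit for delays on small messages is Nagle's algorithm interacting with delayed ACKs; whether it explains a first-message-only delay here is a guess, but it is cheap to rule out by disabling Nagle on both ends:

        using System.Net.Sockets;

        static class LowLatency
        {
            public static void DisableNagle(TcpClient client)
            {
                // Send small packets immediately instead of buffering them
                // while waiting for the previous packet's ACK.
                client.NoDelay = true;

                // Equivalent at the Socket level:
                client.Client.SetSocketOption(SocketOptionLevel.Tcp,
                                              SocketOptionName.NoDelay, true);
            }
        }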

    Read the article

  • ING: Scaling Role Management and Access Certification to Thousands of Applications

    - by Tanu Sood
    Organizations deal with employee and user access certifications in different ways. There's collation of multiple spreadsheets, an intense two-week exercise by managers, or the use of access certification tools across a handful of applications. But for most organizations, compliance is about certifying user access for thousands of employees across hundreds of systems; managing and auditing millions of entitlement combinations on a periodic basis poses a huge scale challenge. ING solved the compliance scale challenge using an Identity Platform approach. Join the live webcast featuring ING's enterprise architect, Mark Robison, as he discusses how a platform approach offers value that is greater than the sum of its parts and enables ING to successfully meet their security and compliance goals. Mark will also share his implementation experiences and discuss the key requirements for managing the complexity and scale of access certification efforts at ING. Mark will be joined by Neil Gandhi, Principal Product Manager for Oracle Identity Analytics.
    Live Webcast: ING: Scaling Role Management and Access Certification to Thousands of Applications
    Wednesday, April 11th at 10 am Pacific / 1 pm Eastern
    Register Today

    Read the article

  • What is the rationale behind snazzy Window Managers/Composers?

    - by Emanuele
    This is more of a generic question, based on trying out window managers like Awesome, Mate and others. To me it looks like other window managers such as Gnome3 and/or Unity are heavy and pointless. I do understand that fully composited UIs are more pleasant to the eye, but apart from that, what are the other major benefits? As an example, when I run the game Heroes of Newerth (using nVidia drivers) under:
    Unity: the FPS drops sharply.
    Gnome3: FPS is OK, but X and other processes use 15~20% of CPU and quite a lot of additional memory.
    Awesome: FPS is OK, and other processes use very little memory and CPU.
    Below are some numbers regarding what I'm saying (please note my system is 64-bit: AMD Phenom II X4, 8 GB RAM, an nVidia 470 GTX, and an SSD disk). All data is sorted by memory usage (watch -d -n 10 "ps -e -o pcpu,pmem,pid,user,cmd --sort=-pmem | head -20"); again, note that the CPU time of ./hon-x86_64 might differ between listings because I can't snapshot the system at exactly the same moment.
    Awesome:
        %CPU %MEM  PID USER CMD
        91.8 21.6 3579 ema  ./hon-x86_64
         2.4  0.9 3223 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.6  0.4 2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinp
         0.3  0.2 3602 ema  gnome-terminal
         0.0  0.2 2698 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service
    Gnome3:
        %CPU %MEM  PID USER CMD
        82.7 21.0 5528 ema  ./hon-x86_64
        17.7  1.7 5315 ema  /usr/bin/gnome-shell
         5.8  1.2 5062 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.0  0.4 5657 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.7  0.3 5331 ema  nautilus -n
         1.6  0.3 2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -
         0.9  0.2 5451 ema  gnome-terminal
         0.1  0.2 5400 ema  /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service
    Unity 3D:
        %CPU %MEM  PID USER CMD
        87.2 21.1 6554 ema  ./hon-x86_64
        10.7  2.6 6105 ema  compiz
        17.8  1.1 5842 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
         1.3  0.9 6672 root /usr/bin/python /usr/sbin/aptd
         0.4  0.4 6606 ema  /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
         0.5  0.3 6115 ema  nautilus -n
         1.5  0.3 2600 ema  /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinput -sasl errl
         0.3  0.2 6180 ema  /usr/lib/unity/unity-panel-service
    So my point is: what's the rationale behind going towards such heavy WMs/composers?

    Read the article

  • scaling point sprites with distance

    - by Will
    How can you scale a point sprite by its distance from the camera? In the GLSL vertex shader:
        gl_PointSize = size / gl_Position.w;
    seems to be on the right track; for any given scene, all sprites appear nicely scaled by distance. Is this correct? How do you compute the proper scaling for my vertex attribute size? I want each sprite to be scaled by the modelview matrix. I had played with arbitrary values, and it seems that size is the radius in pixels at the camera and is not in modelview scale. I've also tried:
        gl_Position = pMatrix * mvMatrix * vec4(vertex, 1.0);
        vec4 v2 = pMatrix * mvMatrix * vec4(vertex.x, vertex.y + 0.5 * size, vertex.z, 1.0);
        gl_PointSize = length(gl_Position.xyz - v2.xyz) * gl_Position.w;
    But this makes the sprites bigger in the distance, rather than smaller:
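
    For what it's worth, a hedged derivation under a standard perspective projection (symbols assumed here: fovy is the vertical field of view, H the viewport height in pixels, s the sprite's world-space diameter): a sphere of diameter s at eye-space depth z_eye projects to approximately

        pixels = s * (H / (2 * tan(fovy / 2))) / |z_eye|

    and since w_clip = |z_eye| for the usual perspective matrix, with P11 = cot(fovy / 2) being the [1][1] entry of the projection matrix, this becomes

        gl_PointSize = s * P11 * H / (2 * w_clip)

    So size / gl_Position.w is correct up to a constant factor, which is why arbitrary values appeared to work: size silently absorbs s * P11 * H / 2.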

    Read the article

  • Problem when scaling game screen in Libgdx

    - by Nicolas Martel
    Currently, I'm able to scale the screen by applying this bit of code to an OrthographicCamera:
        Camera.setToOrtho(true, Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2);
    But something quite strange happens with this solution; take a look at this picture of my game below. Seems fine, right? But upon further investigation, many components are rendered off by one pixel, and all the tiles are. Take a closer look: I circled a couple of the errors. Note that the shadow of the warrior I circled appears fine for the other warriors. Also keep in mind that everything is rendered at pixel-perfect precision when I disable the scaling. I actually thought of a possible source for the problem as I was writing this, but I decided to still post it because somebody else might run into the same issue.

    Read the article

  • Question about creating a sprite based 2-D Side Scroller with scaling/zooming

    - by Arthur
    I'm just wondering if anyone can offer any advice on how best to go about creating a 2-D game with zooming/scaling features akin to the early Samurai Shodown games. In this case it would be a side-scroller a la Metal Slug; the zooming would come in as more enemy sprites enter the screen, or when facing a large boss. A feature that would be both cosmetic and functional to the game. I've done some reading and noticed a few suggestions that included drawing different-sized sprites: a standard size and a zoomed-out size. Any thoughts? Thanks for your time.

    Read the article

  • Re-sizing the form without scaling the GUI

    - by Bmoore
    I am writing a turn-based strategy game in C#. My GUI implementation consists of a class that extends Form containing a class that extends Panel. When I render the GUI, I draw in the panel's Paint method. I am trying to figure out the best way to handle form resize events. I know I want a minimum window size, but I would prefer not to have a maximum or a fixed size. Ideally the GUI would reveal more/less of the map as the user changes the window size; I would like to avoid scaling the graphics if at all possible. What is the best way to handle resize events?
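
    A minimal sketch of that approach: keep the tile size fixed, set a MinimumSize on the form, and let the panel's paint handler compute how many tiles fit in the current client area (TileSize and DrawTile are illustrative placeholders, not from the question):

        using System.Drawing;
        using System.Windows.Forms;

        class MapPanel : Panel
        {
            const int TileSize = 32; // placeholder tile dimensions

            public MapPanel()
            {
                DoubleBuffered = true;
                // Repaint the whole panel whenever it is resized
                SetStyle(ControlStyles.ResizeRedraw, true);
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                // Reveal exactly as many tiles as fit; nothing is scaled
                int cols = ClientSize.Width / TileSize + 1;
                int rows = ClientSize.Height / TileSize + 1;
                for (int y = 0; y < rows; y++)
                    for (int x = 0; x < cols; x++)
                        DrawTile(e.Graphics, x, y);
            }

            void DrawTile(Graphics g, int x, int y)
            {
                // placeholder: real code would draw the map tile at (x, y)
                g.DrawRectangle(Pens.Gray, x * TileSize, y * TileSize, TileSize, TileSize);
            }
        }

    The form then only needs something like MinimumSize = new Size(640, 480) and a docked MapPanel; no extra resize handling beyond the repaint.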

    Read the article

  • How is "Esc" key handled in WPF Window?

    - by Markus2k
    I want the Escape key to close my WPF window. However, if there is a control that can consume that Escape key, I don't want to close the window. There are multiple solutions for closing a WPF window when the Esc key is pressed, e.g. http://stackoverflow.com/questions/419596/how-does-the-wpf-button-iscancel-property-work. However, that solution closes the window without regard to whether there is an active control that can consume the Escape key.
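
    A sketch of one way to get that behaviour (an approach to try, not a canonical answer): attach to the window's bubbling KeyDown event. A child control that consumes Escape, such as a ComboBox closing its dropdown, marks the routed event as handled, so the window-level handler never fires and the window stays open:

        using System.Windows;
        using System.Windows.Input;

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                // Bubbling handler: skipped when a child control has
                // already marked the Escape key press as handled.
                KeyDown += (sender, e) =>
                {
                    if (e.Key == Key.Escape)
                        Close();
                };
            }
        }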

    Read the article

  • How to create a window that opens when the main window is closed?

    - by Abeer
    Hi everyone. I am an absolute beginner in Qt. I am trying to create a window that has only text and one push button; when you press the button, you get another window that holds the menu for the program. Unfortunately, I don't know how to create the new window and connect it to the main window, so I need your help.

    Read the article

  • Deferred rendering with VSM - Scaling light depth loses moments

    - by user1423893
    I'm calculating my shadow term using a VSM method. This works correctly when using forward rendered lights but fails with deferred lights.

        // Shadow term (1 = no shadow)
        float shadow = 1;

        // [Light Space -> Shadow Map Space]
        // Transform the surface into light space and project
        // NB: Could be done in the vertex shader, but doing it here keeps the
        // "light shader" abstraction and doesn't limit the number of shadowed lights
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4 surf_tex = mul(position, LightViewProjection);

        // Re-homogenize
        // 'w' component is not used in later calculations so no need to homogenize
        // (it will equal '1' if homogenized)
        surf_tex.xyz /= surf_tex.w;

        // Rescale viewport to be [0,1] (texture coordinate system)
        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = -surf_tex.y * 0.5f + 0.5f;

        // Half texel offset
        //shadow_tex += (0.5 / 512);

        // Scaled distance to light (instead of 'surf_tex.z')
        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;
        //float rescaled_dist_to_light = surf_tex.z;

        // [Variance Shadow Map Depth Calculation]
        // No filtering
        float2 moments = tex2D(ShadowSampler, shadow_tex).xy;

        // Flip the moments values to bring them back to their original values
        moments.x = 1.0 - moments.x;
        moments.y = 1.0 - moments.y;

        // Compute variance
        float E_x2 = moments.y;
        float Ex_2 = moments.x * moments.x;
        float variance = E_x2 - Ex_2;
        variance = max(variance, Bias.y);

        // Surface is fully lit if the current pixel is before the light occluder (lit_factor == 1)
        // One-tailed inequality valid if
        float lit_factor = (rescaled_dist_to_light <= moments.x - Bias.x);

        // Compute probabilistic upper bound (mean distance)
        float m_d = moments.x - rescaled_dist_to_light;

        // Chebychev's inequality
        float p = variance / (variance + m_d * m_d);
        p = ReduceLightBleeding(p, Bias.z);

        // Adjust the light color based on the shadow attenuation
        shadow *= max(lit_factor, p);

    This is what I know for certain so far:
    - The lighting is correct if I do not try to calculate the shadow term (no shadows).
    - The shadow term is correct when calculated using forward rendered lighting (VSM works with forward rendered lights).
    With the current rescaled light distance (LightAttenuation.y is the far plane value):

        float rescaled_dist_to_light = dist_to_light / LightAttenuation.y;

    the light is correct but the shadow appears to be zoomed in and misses the blurring. When I do not rescale the light and use the homogenized 'surf_tex':

        float rescaled_dist_to_light = surf_tex.z;

    the shadows are blurred correctly but the lighting is incorrect and the cube model is no longer lit. Why is scaling by the far plane value (LightAttenuation.y) zooming in too far? The only other factor involved is my world pixel position, which is calculated as follows:

        // [Position]
        float4 position;

        // [Screen Position]
        position.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
        position.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        position.z = 1.0 - position.z;
        position.w = 1.0; // 1.0 = position.w / position.w

        // [World Position]
        position = mul(position, CameraViewProjectionInverse);

        // Re-homogenize position (xyz AND w, otherwise shadows will bend when camera is close)
        position.xyz /= position.w;
        position.w = 1.0;

    Using the inverse matrix of the camera's view x projection matrix does work for lighting, but maybe it is incorrect for the shadow calculation?

    EDIT: Light calculations for shadow, including 'dist_to_light':

        // Work out the light position and direction in world space
        float3 light_position = float3(LightViewInverse._41, LightViewInverse._42, LightViewInverse._43);
        // Direction might need to be negated
        float3 light_direction = float3(-LightViewInverse._31, -LightViewInverse._32, -LightViewInverse._33);

        // Unnormalized light vector
        float3 dir_to_light = light_position - position;
        // Direction from vertex
        float dist_to_light = length(dir_to_light);
        // Normalise 'toLight' vector for lighting calculations
        dir_to_light = normalize(dir_to_light);

    EDIT2: These are the calculations for the moments (depth):

        //=============================================
        //---[Vertex Shaders]--------------------------
        //=============================================
        DepthVSOutput depth_VS(
            float4 Position : POSITION,
            uniform float4x4 shadow_view,
            uniform float4x4 shadow_view_projection)
        {
            DepthVSOutput output = (DepthVSOutput)0;

            // First transform position into world space
            float4 position_world = mul(Position, World);

            output.position_screen = mul(position_world, shadow_view_projection);
            output.light_vec = mul(position_world, shadow_view).xyz;

            return output;
        }

        //=============================================
        //---[Pixel Shaders]---------------------------
        //=============================================
        DepthPSOutput depth_PS(DepthVSOutput input)
        {
            DepthPSOutput output = (DepthPSOutput)0;

            // Work out the depth of this fragment from the light, normalized to [0, 1]
            float2 depth;
            depth.x = length(input.light_vec) / FarPlane;
            depth.y = depth.x * depth.x;

            // Flip depth values to avoid floating point inaccuracies
            depth.x = 1.0f - depth.x;
            depth.y = 1.0f - depth.y;

            output.depth = depth.xyxy;
            return output;
        }

    EDIT 3: I have tried the following:

        float4 pp;
        pp.xy = input.PositionClone.xy; // Use 'x' and 'y' components already homogenized for uv coordinates above
        pp.z = tex2D(DepthSampler, texCoord).r; // No need to homogenize 'z' component
        pp.z = 1.0 - pp.z;
        pp.w = 1.0; // 1.0 = position.w / position.w

        // Determine the depth of the pixel with respect to the light
        float4x4 LightViewProjection = mul(LightView, LightProjection);
        float4x4 matViewToLightViewProj = mul(CameraViewProjectionInverse, LightViewProjection);
        float4 vPositionLightCS = mul(pp, matViewToLightViewProj);
        float fLightDepth = vPositionLightCS.z / vPositionLightCS.w;

        // Transform from light space to shadow map texture space.
        float2 vShadowTexCoord = 0.5 * vPositionLightCS.xy / vPositionLightCS.w + float2(0.5f, 0.5f);
        vShadowTexCoord.y = 1.0f - vShadowTexCoord.y;

        // Offset the coordinate by half a texel so we sample it correctly
        vShadowTexCoord += (0.5f / 512); //g_vShadowMapSize

    This suffers the same problem as the second picture. I have tried storing the depth based on the view x projection matrix:

        output.position_screen = mul(position_world, shadow_view_projection);
        //output.light_vec = mul(position_world, shadow_view);
        output.light_vec = output.position_screen;

        depth.x = input.light_vec.z / input.light_vec.w;

    This gives a shadow that has lots of surface acne due to horrible floating point precision errors. Everything is lit correctly, though.

    EDIT 4: Found an OpenGL based tutorial here. I have followed it to the letter and it would seem that the uv coordinates for looking up the shadow map are incorrect. The source uses a scale matrix to get the uv coordinates for the shadow map sampler:

        /// <summary>
        /// The scale matrix is used to push the projected vertex into the 0.0 - 1.0 region.
        /// Similar in role to a * 0.5 + 0.5, where -1.0 < a < 1.0.
        /// </summary>
        const float4x4 ScaleMatrix = float4x4
        (
            0.5,  0.0, 0.0, 0.0,
            0.0, -0.5, 0.0, 0.0,
            0.0,  0.0, 0.5, 0.0,
            0.5,  0.5, 0.5, 1.0
        );

    I had to negate the 0.5 for the y scaling (M22) in order for it to work, but the shadowing is still not correct. Is this really the correct way to scale?

        float2 shadow_tex;
        shadow_tex.x = surf_tex.x * 0.5f + 0.5f;
        shadow_tex.y = surf_tex.y * -0.5f + 0.5f;

    The depth calculations are exactly the same as the source code, yet they still do not work, which makes me believe something about the uv calculation above is incorrect.

    Read the article

  • OpenVPN Clients using server's connection (with no default gateway)

    - by Branden Martin
    I wanted an OpenVPN server so that I could create a private VPN network for staff to connect to. However, not as planned, when clients connect to the VPN they end up using the VPN server's internet connection (e.g. when going to whatsmyip.com, the IP shown is that of the server and not the client's home connection).

    server.conf:

        local <serverip>
        port 1194
        proto udp
        dev tun
        ca ca.crt
        cert x.crt
        key x.key
        dh dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 9

    client.conf:

        client
        dev tun
        proto udp
        remote <server> 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert x.crt
        key x.key
        ns-cert-type server
        comp-lzo
        verb 3

    Server's routing table:

        Kernel IP routing table
        Destination   Gateway          Genmask          Flags Metric Ref  Use Iface
        10.8.0.2      *                255.255.255.255  UH    0      0    0   tun0
        10.8.0.0      10.8.0.2         255.255.255.0    UG    0      0    0   tun0
        69.64.48.0    *                255.255.252.0    U     0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0
        default       static-ip-69-64  0.0.0.0          UG    0      0    0   eth0

    Server's iptables:

        Chain INPUT (policy ACCEPT)
        target            prot opt source    destination
        fail2ban-proftpd  tcp  --  anywhere  anywhere     multiport dports ftp,ftp-data,ftps,ftps-data
        fail2ban-ssh      tcp  --  anywhere  anywhere     multiport dports ssh
        ACCEPT            udp  --  anywhere  anywhere     udp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:20000
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:webmin
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:https
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:www
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:imaps
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:imap2
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:pop3s
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:pop3
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ftp-data
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ftp
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:domain
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:smtp
        ACCEPT            tcp  --  anywhere  anywhere     tcp dpt:ssh
        ACCEPT            all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source       destination
        ACCEPT  all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        ACCEPT  all  --  10.8.0.0/24  anywhere
        REJECT  all  --  anywhere     anywhere     reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source  destination

        Chain fail2ban-proftpd (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

        Chain fail2ban-ssh (1 references)
        target  prot opt source    destination
        RETURN  all  --  anywhere  anywhere

    My goal is that clients can only talk to the server and to the other connected clients. Hope I made sense. Thanks for the help!

    Read the article

  • Windows service starts with a custom account only after the Local System account was used

    - by Gauls
    Hi, all of a sudden my Windows service does not start after installation ("Some services stop automatically if they have no work to do"). It uses a custom user for logon. If I change the logon setting to use the Local System account and click start, it works fine. Then, when I go back and change it to use "this account" (the local custom user, under the Users group) and click start, it also works fine. Why didn't it work in the first place? Thanks

    Read the article

  • Disable Windows notifications from regedit in Windows

    - by user33372
    Kindly give me the full registry path from where I can disable Windows notifications. For example, if I disable Ctrl+Alt+Del from regedit, after disabling it Windows sends me the notification that Task Manager has been disabled by your administrator. I don't need these types of notifications. Kindly guide me.

    Read the article

  • How do I optimize TCP stack for HTTP server?

    - by jcisio
    I have an HTTP server that serves only two kinds of page: about 10 KB and about 16 KB (both compressed; other files are served from a CDN). As the latency is quite high (ping takes more than 300 ms), I want to optimize the TCP stack so that the client receives the whole page ASAP. Thus, I have a double question: Which parameters do I have to change (which value for the TCP window)? How do I change them (on a Debian box; FYI, there is a Varnish in front of the HTTP server)?
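
    As a back-of-the-envelope check on "which value", the window only has to cover the bandwidth-delay product of the path. Under assumed figures of a 10 Mbit/s client link and the stated 300 ms RTT:

        window >= bandwidth x RTT = 10,000,000 bit/s x 0.3 s = 3,000,000 bits, i.e. about 375 KB

    For pages of only 10-16 KB, though, the bottleneck is usually the initial congestion window rather than the receive window: assuming the classic initial cwnd of 3 segments (about 4.3 KB at a 1460-byte MSS), slow start delivers roughly 4.3 KB, then 8.6 KB per round trip, so a 16 KB response still costs about three data round trips on top of the handshake.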

    Read the article
