Search Results

Search found 19308 results on 773 pages for 'network efficiency'.

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused, because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant. I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that Firebug sometimes reports this taking 300-600 milliseconds and sometimes 3.68 seconds. I can run the same request over and over again with radically different response times. The query is the same. The number of users on the system is the same. There is no network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed just fluctuates wildly with no rhyme or reason. So, my question is: what else can cause such variability in the response time? How can I determine what's doing it? More importantly, is there any way to make things more consistent?
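
    One way to separate server-side variance from browser-side rendering variance is to time the raw HTTP request outside the browser. A minimal sketch in Python using the requests package; the URL is a hypothetical stand-in for the real jqGrid data endpoint:

        import statistics
        import time

        import requests

        URL = "http://localhost/myapp/grid_data.php"  # hypothetical endpoint; substitute the real jqGrid data URL

        samples = []
        for _ in range(20):
            start = time.perf_counter()
            response = requests.get(URL)
            response.raise_for_status()
            samples.append(time.perf_counter() - start)

        print("min    %.3f s" % min(samples))
        print("median %.3f s" % statistics.median(samples))
        print("max    %.3f s" % max(samples))

    If these timings are steady while Firebug still fluctuates, the variability is in the browser/rendering path; if they fluctuate too, instrument the PHP side (e.g. with microtime) to see which stage of the request is slow.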

    Read the article

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read, from a database on our network, to load a GridView inside an UpdatePanel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant given the large DB reads, but it takes on the order of seconds, not minutes. This all works great until I put it up on the server: it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the GridView. I'm baffled as to why it would not perform exactly as it did from Visual Studio (it is in release mode and I have turned off the debug flag). I have since tried things like eliminating unneeded update panels and removing the Ajax toolkit, but nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server. I am wondering where to look next. Has anyone else had this problem before? Could it be caused by UpdatePanels or ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • Sending Email over VPN SmtpException net_io_connectionclosed

    - by Holy Christ
    I am sending an email from a WPF application. When sending as a domain user on the network, the email sends as expected. However, when I attempt to send email over a VPN connection, I get the following exception:

        Exception: System.Net.Mail.SmtpException: Failure sending mail. --- System.IO.IOException:
        Unable to read data from the transport connection: net_io_connectionclosed.
           at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine)
           at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine)
           at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller)
           at System.Net.Mail.SmtpConnection.GetConnection(String host, Int32 port)
           at System.Net.Mail.SmtpTransport.GetConnection(String host, Int32 port)
           at System.Net.Mail.SmtpClient.GetConnection()
           at System.Net.Mail.SmtpClient.Send(MailMessage message)

    I have tried using impersonation as well as setting the Credentials on the SmtpClient. Neither seems to work:

        using (new ImpersonateUser("myUser", "MYDOMAIN", "myPass"))
        {
            var client = new SmtpClient("myhost.com");
            client.UseDefaultCredentials = true;
            client.Credentials = new NetworkCredential("myUser", "myPass", "MYDOMAIN");
            client.Send(mailMessage);
        }

    I've also tried using Wireshark to view the message over the wire, but I don't know enough about SMTP to know what I'm looking for. One other variable is that the machine I'm using on the VPN is Vista Business and the machine on the network is Win7. I don't think it's related, but then I wouldn't be asking if I knew the issue! :) Any ideas?
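
    Since the exception is thrown inside GetConnection, it can help to drive the same SMTP server by hand from the VPN machine, outside the WPF app, and watch where the dialogue dies. A small sketch using Python's standard smtplib; the host, port and credentials are placeholders taken from the snippet above:

        import smtplib

        HOST = "myhost.com"   # the same SMTP host the SmtpClient uses
        PORT = 25             # assumption: default SMTP port; adjust if the server listens elsewhere

        server = smtplib.SMTP(HOST, PORT, timeout=15)
        server.set_debuglevel(1)            # prints the full SMTP dialogue to stderr
        server.ehlo()                       # does the server even answer EHLO over the VPN?
        # server.starttls(); server.ehlo()  # uncomment if the server requires TLS
        # server.login("myUser", "myPass")  # uncomment if the server wants AUTH
        server.quit()

    If the connection is dropped right after connect or EHLO, the problem sits between the VPN subnet and the SMTP server (often a relay or sender restriction on that address range) rather than in the .NET code.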

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the webservers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS on the local network), and sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, that these are slow over NFS, and that this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application-developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are: improve NFS performance by tuning parameters (my instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning); move to a different key-value database, such as memcachedb or Tokyo Cabinet; or replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it). How should I approach this problem?

    Read the article

  • Setting up SVN/LAMP/test server on Linux: where to start?

    - by John Isaacks
    I have an Ubuntu machine I have set up. I installed Apache 2 and PHP 5 on it, and I can access the web server from other machines on the network via http://linux-server. I have Subversion installed on it, and also vsftpd, so I can FTP to it from another computer on the network. Currently, other users and I use Dreamweaver to check files in and out directly from our live site to make changes. I want to connect to the Linux server from a PC, make changes on the test server until they are ready, and then push them to the live site, and I want to work Subversion into this workflow as well, but I am not sure what the best workflow is or how to set it up. I have no experience with Linux, SVN, or even using a test server; the check-in/check-out we currently do is the way I have always done it. I have already hit many snags just getting this far because of my lack of knowledge in the area. Dreamweaver 5 has integration with Subversion, but I can't figure out how to get it to work. I want to set up and create the best workflow possible. I don't expect anyone to give me an answer that will enlighten me enough to know everything I need to do (although if possible, that would be great); instead, I am looking for a knowledge-path kind of answer: a general outline of what I need to do, accompanied by links on how to do it (read this book to learn Linux, then read this article to learn SVN, etc.). I would be happy just getting it all set up, but I would also like to understand what I am actually doing while setting it up.
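
    For the Subversion piece specifically, the server-side setup is only a few commands. A rough sketch; the paths and repository name are examples, and the http:// checkout URL assumes the repository is later exposed through Apache with mod_dav_svn (file:// or svn:// URLs work the same way until then):

        # on the Ubuntu server: create an empty repository
        sudo mkdir -p /var/svn
        sudo svnadmin create /var/svn/website

        # one-time import of the current site into the repository
        svn import /var/www/html file:///var/svn/website/trunk -m "Initial import"

        # on each developer PC: check out a working copy, edit, then commit
        svn checkout http://linux-server/svn/website/trunk website
        svn commit -m "Describe the change"

    The usual workflow then becomes: commit from the working copy, update/deploy the checkout that Apache serves as the test site, and only push to the live site once the test server looks right.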

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-sourcing side of embedded programming, and so, after being completely overwhelmed by all the choices out there (PC/104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!), I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need:

    Required:
    - 2 RS-232 serial ports (one used all the time for the primary UI, the second one not continuous)
    - 1 modem (9600+ baud OK) [the modem will be in simultaneous use with only one of the serial-port devices, so interrupt sharing with one serial port is OK, but not both]
    - Minimum permanent/long-term storage: whatever the OS requires + 1 MB (executable) + 512 KB (data files)
    - RAM: minimal, whatever the OS requires plus maybe 1 MB for the executable

    Nice to have:
    - USB port(s)
    - Ethernet network port
    - Wireless network

    Implementation languages (I will adapt to any OS):
    - First choice: Java/C# (Mono OK)
    - Second choice: C/Pascal
    - Third: BASIC

    OK, given all this, I am having a lot of trouble finding hardware that supports it and is low in cost. Every manufacturer site I visit has a lot of options, and it's difficult to see whether their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS-232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-).

    EDIT: A bit of background: this is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement it on a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • LDAP/AD Integrated Group/Membership Management Package suitable for embedding in an application

    - by Ernest
    In several web applications, it is often necessary to define groups of users for purposes of membership as well as role management. For example, in one of our applications we would like to use a group of "Network Engineers" and another group that consists of "Managers" of such Network Engineers. The information we need is the contact details of the members of each group. So far, we have written our own tools to allow the administrator of the application to add/delete/move groups and their memberships and store them in either an XML file or a database. Increasingly, companies already have the groups we want defined in LDAP/AD, so it would be best to create a pointer in our application to the corresponding group in LDAP. Although there are a number of LDAP libraries and LDAP browsers available, and we could code this and provide a web front end to get a list of available groups and their members, we are wondering if there is already a "component framework" available that would readily provide this LDAP browsing functionality, which we could just embed in our application. Something between a library and a full LDAP browser product? (To clarify, the use case is for an admin of our web application to create a locally relevant group name and then map it to an existing LDAP group. To enable this in the UI, we would like to present a way for the admin to browse the available groups in the company LDAP server, view their membership, and select the LDAP group they would like to map to the locally relevant group name. In a second step, we would then synchronize the members of that LDAP group and their contact details to a store in our application.) Appreciate any pointers.
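
    Absent a ready-made embeddable browser component, the "browse groups, then pull members and contact details" steps reduce to two LDAP searches. A rough library-level sketch using the Python ldap3 package; the server address, base DN, group name and credentials are assumptions to be replaced with the customer's values:

        from ldap3 import Server, Connection, NTLM, ALL

        server = Server("ldap://dc.example.com", get_info=ALL)          # hypothetical AD domain controller
        conn = Connection(server, user="EXAMPLE\\binduser", password="secret",
                          authentication=NTLM, auto_bind=True)

        # step 1: list groups the admin can pick from
        conn.search("dc=example,dc=com", "(objectClass=group)", attributes=["cn", "description"])
        for entry in conn.entries:
            print(entry.cn, entry.description)

        # step 2: resolve one selected group's members and their contact details
        conn.search("dc=example,dc=com",
                    "(&(objectClass=user)(memberOf=cn=Network Engineers,ou=Groups,dc=example,dc=com))",
                    attributes=["displayName", "mail", "telephoneNumber"])
        for entry in conn.entries:
            print(entry.displayName, entry.mail, entry.telephoneNumber)

    Wrapping the first search in a tree/list UI and the second in a periodic sync job is essentially what a "component framework" would have to do anyway.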

    Read the article

  • How do I connect to SQL Server with VB?

    - by Wayne Werner
    Hi, I'm trying to connect to a SQL Server from VB. The SQL Server is across the network and uses my Windows login for authentication. I can access the server using the following Python code:

        import odbc
        conn = odbc.odbc('SignInspection')
        c = conn.cursor()
        c.execute("SELECT * FROM list_domain")
        c.fetchone()

    This code works fine, returning the first result of the SELECT. However, I've been trying to use SqlClient.SqlConnection in VB, and it fails to connect. I've tried several different connection strings, but this is the current code:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim conn As New SqlClient.SqlConnection
            conn.ConnectionString = "data source=signinspection;initial catalog=signinspection;integrated security=SSPI"
            Try
                conn.Open()
                MessageBox.Show("Sweet Success")
                'Insert some code here, woo
            Catch ex As Exception
                MessageBox.Show("Failed to connect to data source.")
                MessageBox.Show(ex.ToString())
            Finally
                conn.Close()
            End Try
        End Sub

    It fails miserably, and it gives me an error that says "A network-related or instance-specific error occurred... (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)". I'm fairly certain it's my connection string, but nothing I've found has given me any solid examples (server=mySQLServer is not a solid example) of what I need to use. Thanks! -Wayne
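
    The "Named Pipes Provider, error: 40" text generally means the client never reached the server over the transport it tried, rather than a credentials problem. A quick, hedged check of plain TCP reachability from the same machine; 1433 is only the default port of a default instance, so treat it as an assumption:

        import socket

        HOST = "signinspection"   # the data source name from the connection string
        PORT = 1433               # default TCP port of a default SQL Server instance (assumption)

        try:
            with socket.create_connection((HOST, PORT), timeout=5):
                print("TCP connection to %s:%d succeeded" % (HOST, PORT))
        except OSError as exc:
            print("TCP connection failed: %s" % exc)

    If this fails, TCP/IP may be disabled for the instance, or the data source may need the instance name (e.g. Data Source=signinspection\INSTANCENAME) or an explicit port rather than the bare machine name.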

    Read the article

  • Should I close sockets from both ends?

    - by Roman
    I have the following problem. My client program monitors for the availability of a server in the local network (using Bonjour, but that does not really matter). As soon as a server is "noticed" by the client application, the client tries to create a socket: Socket(serverIP,serverPort);. At some point the client can lose the server (Bonjour says that the server is not visible in the network anymore). So, the client decides to close the socket, because it is not valid anymore. At some moment the server appears again. So, the client tries to create a new socket associated with this server. But! The server can refuse to create this socket, since it (the server) already has a socket associated with the client IP and client port. This happens because the socket was closed by the client, not by the server. Can this happen? And if it is the case, how can this problem be solved? Well, I understand that it is unlikely that the client will try to connect to the server from the same port (client port), since the client selects its ports randomly. But it still could happen (just by chance). Right?
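
    A small illustrative sketch, in Python rather than Java for brevity (the host and port are placeholders): the OS picks a fresh ephemeral port for each new connect, so a clash with the old client port is extremely unlikely, and an orderly client-side shutdown gives the server an EOF so it can drop its half of the old connection as well.

        import socket

        SERVER = ("server.local", 9000)   # hypothetical Bonjour-advertised service

        def connect_once():
            sock = socket.create_connection(SERVER, timeout=5)
            print("local endpoint:", sock.getsockname())  # a new ephemeral port on each connect
            try:
                sock.sendall(b"hello")
            finally:
                # orderly shutdown: the server reads EOF and can close its side too,
                # instead of being left holding a stale connection for this IP/port
                sock.shutdown(socket.SHUT_RDWR)
                sock.close()

        connect_once()
        connect_once()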

    Read the article

  • Ruby On Rails with HTML5 offline apps - Firefox does not cache the application.manifest but Safari does

    - by hoitomt
    I'm working off of this Railscast tutorial: episode 247. I'm up to the point in the tutorial where the rack-offline gem has been added, the application.manifest route added, and a reference to the manifest added in the html tag, right before it starts talking about problems with caching. Safari works as intended: when the server is running, the page is served, and from the server logs I can see that Safari makes a single request to the server every time for the items page. When I turn off the server, the page still displays, even after shutting down the browser and restarting; it appears to be pulling from the application.manifest (cache manifest). Firefox does not work as intended: when accessing the page for the first time, Firefox lets me know that the web page wants to store something locally, which I allow. After clicking Allow, Firefox makes 5 requests to the server for the page (from the server log), and the hash is different in every request. Is it possible that the changing hash is triggering Firefox to keep trying to get the new manifest until it reaches some maximum (5 attempts)? Then, after the server is stopped, Firefox will not show the page at all. It looks like it isn't caching the application.manifest. Firefox also gives you a way to see which sites are storing data locally, under Tools/Options/Advanced/Network (Firefox/Preferences/Advanced/Network on a Mac). I see localhost there, but the size is 0 bytes. So for some reason, Firefox is not downloading my application.manifest along with the files.

    Read the article

  • Suggestions for Backup solution

    - by jiewmeng
    I am deciding between a Windows Home Server, a simple NAS, and extra HDDs in my desktop. I will be the main user, and I am looking to fulfil the following needs:

    - Reliability (I am thinking RAID 1 or 5).
    - Less prone to virus/malware infections (will using a separate NAS or home server help? A Windows Home Server is still a Windows PC, just separated by the network).
    - Power efficiency (e.g. spin down disks when not in use).
    - Downloads (I may want to download big files/torrents overnight, and I may not want to use a full-powered PC for it; is the power usage of a full PC vs. a NAS significant enough to justify the cost of a new system, especially since I am the only user?).
    - Performance (I would like to write and access my files fast; on second thought, maybe I can forgo this for backup, perhaps with a WD Green HDD, but how much slower would it be? Plus, since I am the only user, I think the whole HDD will be mine).

    Read the article

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine with MS Windows Server 2008 Datacenter. First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It would have required a few hours, and I was forced to cancel the transfer, since the remote desktop became too unresponsive and sluggish (slow). So I restarted it after midnight, when my local transfer speed is over 4 MB/s. My impression is that, independently of the bandwidth of the copy & paste transfer, the remote computer becomes sluggish while copying over RDP. At the same time, downloading from the internet doesn't make the remote host sluggish. As far as I understand, this is because the clipboard of the remote computer, and so its memory, becomes overloaded by the transfer. How can I control (restrict) the usage of the clipboard for a specific process (the pasting of a file)? What are the possible ways to control it? Update: After reading that the slow transfer speed is caused by the encryption used for copy & pasting over RDP, and since I am more interested in overall efficiency (both the time to get the file and the possibility to keep working without waiting), I changed the question title from "How to control the usage of the remote desktop clipboard for pasting a big file?" to "How to better copy & paste big files over RDP?". For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder with the unzipped files? More exactly, I wanted to ask: what are the possible ways to improve the overall experience: the speed of transfer (i.e. the availability of the needed file) and the responsiveness of the remote host (making the remote computer available for work before the copy & pasting completes)?

    Read the article

  • Cannot open ports in iptables on CentOS 5?

    - by abszero
    I am trying to open up ports in CentOS's firewall and am having a terrible go at it. I have followed the HowTo here: http://wiki.centos.org/HowTos/Network/IPTables, as well as a few other places on the net, but I still can't get the bloody thing to work. Basically I want to get two things working over the internal network: VNC and Apache. The problem is that the firewall is blocking all attempts to connect to these services. If I issue service iptables stop and then try to access the server via VNC or hit the web server, everything works as expected. However, the moment I turn iptables back on, all of my access is blocked. Below is a truncated version of my iptables file as it appears in vi:

        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5801 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 6001 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT

    Really, I would just be happy if I could get port 80 opened up for Apache, since I can do most things via PuTTY, but if I could figure out VNC as well, that would be cool. As far as VNC goes, there is just a single user desktop that I am trying to connect to via [ipaddress]:1. Any help would be greatly appreciated!
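
    One detail worth checking: on a stock CentOS 5 ruleset the RH-Firewall-1-INPUT chain ends with a catch-all REJECT rule, and lines added with -A land after it, so they never match. Assuming /etc/sysconfig/iptables still has that final rule, the ACCEPT lines need to sit above it, roughly like this:

        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j ACCEPT
        -A RH-Firewall-1-INPUT -p tcp -m state --state NEW -m tcp --dport 5901 -j ACCEPT
        -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

    After editing the file, service iptables restart reloads the rules.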

    Read the article

  • One bidirectional TCP socket or two unidirectional? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2G+). The OS is Linux. Will it be faster with 1 TCP socket (used for both send and recv) or with 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong, maybe). The benefit of 2 unidirectional connections comes from Linux (http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847):

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         *
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false, and only a non-unidirectional socket stops the kernel from taking the fast path on receive.
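
    Whichever layout wins, the measurement itself is simple: a ping-pong loop over messages of 2^1 to 2^13 bytes with Nagle disabled, timed at the client. A rough sketch of the client side in Python; the peer address is a placeholder for an echo server, and a real benchmark would repeat the same loop for the two-socket layout:

        import socket
        import time

        SERVER = ("10.0.0.2", 5000)   # placeholder address of an echo peer

        sock = socket.create_connection(SERVER)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no Nagle delay for small messages

        for power in range(1, 14):                 # 2^1 .. 2^13 bytes
            size = 2 ** power
            payload = b"x" * size
            rtts = []
            for _ in range(3):                     # at least 3 round trips per size
                start = time.perf_counter()
                sock.sendall(payload)
                received = 0
                while received < size:             # read until the full echo is back
                    chunk = sock.recv(size - received)
                    if not chunk:
                        raise ConnectionError("peer closed the connection")
                    received += len(chunk)
                rtts.append(time.perf_counter() - start)
            print("%5d bytes: best RTT %.6f s" % (size, min(rtts)))

        sock.close()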

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application, and suddenly my server response times have gone from 500 ms average to 15 s average. I have run every monitoring tool I know: iostat says my disks are 0.03% utilised; mysqltuner.pl says I am OK (output below); top says my processor is 99% idle with a load average of 0.20, 0.31, 0.33; and memory usage is 50% (-/+ buffers/cache: 3997 3974). mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
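
    Since the slowdown appeared with a new release while the machine itself looks idle, the slow query log is the most direct next step, and it is also the tuner's own first recommendation. A hedged sketch of the MySQL side; the threshold and cache size are only starting points, not tuned values:

        -- enable the slow query log at runtime (MySQL 5.1; add the same settings to my.cnf to survive restarts)
        SET GLOBAL slow_query_log = 1;
        SET GLOBAL long_query_time = 1;                      -- log anything slower than 1 second

        -- optional, following mysqltuner's hints; the table cache is usually raised in my.cnf instead
        SET GLOBAL query_cache_size = 64 * 1024 * 1024;

    If the slow log stays empty while responses still take 15 s, the time is being spent in Tomcat/the new application code rather than in MySQL.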

    Read the article

  • Outgoing Emails (Git Patches) Blocked by Windows Live

    - by SteveStifler
    Just recently I dove into the VideoLAN open source project. This was my first time using Git, and when sending in my first patch (using git send-email --to [email protected] patches), I was sent the following message from my computer's local mail in the terminal (I'm on OS X 10.6, by the way):

        Mail rejected by Windows Live Hotmail for policy reasons. We generally do not accept
        email from dynamic IPs as they are not typically used to deliver unauthenticated SMTP
        e-mail to an Internet mail server. http://www.spamhaus.org maintains lists of dynamic
        and residential IP addresses. If you are not an email/network admin please contact
        your E-mail/Internet Service Provider for help. Email/network admins, please visit
        http://postmaster.live.com for email delivery information and support

    They must think I'm a spammer. I have a dynamic IP, and my ISP (Charter) won't let me get a static one, so I tried changing the Git configuration with git config --global user.email "[email protected]" to my Gmail account. However, I got the exact same message again. My guess is that it has something to do with the native mail's preferences, but I have no idea how to access or modify them. Does anybody have any ideas for solving this? Thanks!
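
    Note that user.email only changes the From address; when sendemail.smtpserver is not configured, git send-email falls back to the local mail setup, which here delivers straight from the dynamic IP. Pointing it at an authenticated SMTP relay avoids that. A sketch for ~/.gitconfig, using Gmail's commonly published settings as an example (the address is a placeholder; another provider will use different values):

        [sendemail]
            smtpserver = smtp.gmail.com
            smtpserverport = 587
            smtpencryption = tls
            smtpuser = [email protected]

    With this in place, git send-email prompts for the SMTP password at send time (or sendemail.smtppass can be set, at the cost of storing it in plain text).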

    Read the article

  • Why did you start with Linux? And why did you continue using it?

    - by Stefano Borini
    I'd like to know the reasons that moved you towards Linux. Personally, I started because we had to use a Digital machine for the Fortran 77 exercises during my first year at university. Linux was installed on many university computers, and I got interested in it. I had always liked to code (on the C64) in BASIC and assembler, but I knew nothing about other languages. I soon discovered a chat engine called NUTS, and the idea of becoming proficient in C appealed to me, so I started hacking on the code. To do so, I needed a Unix at home, so I bought Slackware 3.4 and installed it on my Pentium 166. I then continued using it for many years, the reason being that I took pleasure in learning new things and in the openness of information about the internals. It was a great learning platform. I then moved to OS X because I enjoy the power of Unix combined with the beauty and efficiency of its interface. I am interested in your answers because I believe that the panorama has changed somewhat. Although I still expect to find many "hackers" interested in Linux for the sake of knowledge, I also believe that there are other reasons (work, friends, having bought a netbook).

    Read the article

  • Windows based development environment: HyperV, VMWare, or VirtualBox on development machine?

    - by bleepzter
    I am a software engineer with a little bit of an informal "support" role... I am trying to figure out the best possible approach to bringing virtualization technologies into our development process. Since the code we develop is server-centric, testing it often requires a VM with specific software requirements. I used to use VMware Player (the free version) to run my VMs until both of my laptops started exhibiting issues with corrupted Windows 7 services and dying hard drives. All leads pointed to VMware, which, by the way, seems to be a solid product if you pay for the Workstation edition ($300). On a side note, I have always been a fan of the Windows Server product line. I think it makes for one of the best development environments out there: it is highly scalable, highly reliable, and very efficient. So, to be fair, I replaced the drives of the laptops and installed Windows Server 2008 R2, VS2010 Ultimate SP1, SQL Server 2008 R2, TFS Server 2010 and all the other tools and APIs needed to do my work properly. So now I am stuck with a bunch of VMware VMs. I don't want a repeat of what happened before, and I certainly don't want to bog down my machine with an inefficient hypervisor or services that are not needed. Furthermore, the VMDK hard-disk format used by VMware is not compatible with the VHD format of Hyper-V. It is my understanding that converting from one format to the other can only be done by Microsoft System Center Virtual Machine Manager, which I have downloaded from MSDN and am ready to install. I guess the questions at this point are: Does SCVMM run as another service in Windows? Is it a memory hog? Which is the better virtualization technology, Hyper-V or VirtualBox, in terms of efficiency, ease of use and, most importantly, memory footprint? (Keep in mind the development environment already has a ton of services running, such as TFS Server, SQL Server, IIS, etc...) How would you advise me to proceed at this point so that the VMs are still used in the test process? Thanks, Martin

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone. I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt this data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet alone, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent a dictionary attack on the passphrase), and of course use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things: each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (and so make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once, and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application. So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? Flawed? Overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
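
    For the per-packet mechanics described above, the usual shape is: derive the key once from passphrase + salt, then prepend a fresh random IV to every datagram. A minimal sketch with the Python cryptography package, standing in for whatever language the application really uses; it illustrates the mechanics, not a vetted protocol (in particular it omits an authentication tag, which a real design should add via an HMAC or an AEAD mode such as AES-GCM):

        import os

        from cryptography.hazmat.primitives import hashes, padding
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def derive_key(passphrase: bytes, salt: bytes) -> bytes:
            """128-bit key from passphrase + random salt (PBE), done once per session."""
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16, salt=salt, iterations=200_000)
            return kdf.derive(passphrase)

        def encrypt_packet(key: bytes, payload: bytes) -> bytes:
            """Fresh random IV per datagram, prepended so the receiver can decrypt."""
            iv = os.urandom(16)
            padder = padding.PKCS7(128).padder()
            padded = padder.update(payload) + padder.finalize()
            encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            return iv + encryptor.update(padded) + encryptor.finalize()

        def decrypt_packet(key: bytes, packet: bytes) -> bytes:
            iv, body = packet[:16], packet[16:]
            decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
            padded = decryptor.update(body) + decryptor.finalize()
            unpadder = padding.PKCS7(128).unpadder()
            return unpadder.update(padded) + unpadder.finalize()

        salt = os.urandom(16)                        # sent to the peer once, in the clear
        key = derive_key(b"user passphrase", salt)
        packet = encrypt_packet(key, b"\x00\x01\x00\x2a")   # a couple of small integers
        assert decrypt_packet(key, packet) == b"\x00\x01\x00\x2a"

    The high iteration count in the key derivation is what pushes back against the small passphrase key space: each guess an attacker makes against sniffed traffic costs the same expensive derivation.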

    Read the article

  • High-performance Academic Server [closed]

    - by PHPsmith
    Suppose I want to build a server for the university's academic needs. The server is dedicated to a single site, where users (students and lecturers) just view and fill in academic data. But at certain times (e.g. once a semester), about 12,000 students will access the site simultaneously. Due to limited resources, I have to build the server using free software (except for the operating system, Windows 7, which the university has already provided). The hardware is also limited to ordinary 4-core computers (e.g. an Ivy Bridge Intel Core i7-3770) with approximately 16 GB of memory (DDR3 1600 MHz), equipped with an RJ-45 port (Intel 82579 Gigabit Ethernet). With all these limitations, I have to choose software (web server, database, etc.) appropriate for achieving this goal. I decided to build the site in PHP. Please help me by answering the following questions based on your expertise (my prime candidate after googling is given in parentheses):

    - Which web server is fastest, stable, and secure when implemented and optimized for PHP, and why? (nginx)
    - Which PHP accelerator is fastest, stable, and compatible with the selected web server, and why? (APC with Zend Optimizer+)
    - Which database is fastest, stable, and secure when implemented and optimized for the selected web server and PHP accelerator? (MySQL)
    - Are there any errors in my assumptions so far? If there are, please enlighten me.
    - Is there anything else I need to know in order to achieve this goal? If there is, please enlighten me.

    I understand that performance also depends on the implementation of the source code, so I assume the site will be built as efficiently as possible (e.g. using AJAX).

    Read the article

  • What's the best choice career-wise: to know a little about a lot, or a lot about a little?

    - by nimo
    I work as a developer at a rather small company, and we provide a web application that is used by a large base of customers. Because we are so small, everyone has to be able to do a lot of different tasks. It ranges from advanced support, to developing the product (programming: C/C++, C#, PHP, SQL, JavaScript, HTML, CSS), to handling network configuration and network-related issues, and even sometimes going to sales meetings with potential customers. My concern is that I don't really specialize in any specific area. I know and learn a little about a lot. I graduated from school two years ago, this is my first real employment, and when I look at other positions out there they always require so-and-so many years of experience in a specific area (for example, 5 years of C#). For me to get that kind of specialized experience will be really hard at my current job. My question for you is: what is, in your opinion, the best choice career-wise, to know a little about a lot or a lot about a little? What path did you take? What are the pros and cons that come with that choice?

    Read the article

  • ASP.NET C# Windows authentication IIS config

    - by user1566209
    I'm developing a web page where I need to know the user's Windows authentication values, more precisely the name. Other developments have been done with this kind of authentication, but sadly for me their creators are long gone and I have no contact or documentation. I'm using Visual Studio 2008 and I'm accessing a web service that is on a remote server. The server is a Windows Server 2008 R2 Standard and is running IIS version 7.5. Since I have the source code of the other developments, what I did was copy-paste, and it was working fine when I was calling the web service that was on my machine (localhost). The code is the following:

        //1st way
        WindowsPrincipal wp = new WindowsPrincipal(WindowsIdentity.GetCurrent());
        string strUser = wp.Identity.Name;          //ALWAYS GET NT AUTHORITY\NETWORK SERVICE
        //2nd way
        WindowsIdentity winId = WindowsIdentity.GetCurrent();
        WindowsPrincipal winPrincipal = new WindowsPrincipal(winId);
        string user = winPrincipal.Identity.Name;   //ALWAYS GET NT AUTHORITY\NETWORK SERVICE
        //3rd way
        IIdentity WinId = HttpContext.Current.User.Identity;
        WindowsIdentity wi = (WindowsIdentity)WinId;
        string userstr = wi.Name;                   //ALWAYS GET string empty
        btn_select.Text = userstr;
        btn_cancelar.Text = strUser;
        btn_gravar.Text = user;

    As you can see, I have here 3 ways to get the same thing and, in a sad manner, show my user's name. As for my web.config, I have:

        <authentication mode="Windows"/>
        <identity impersonate="true" />

    In IIS Manager I have tried lots of combinations of enabling and disabling Anonymous Authentication, ASP.NET Impersonation, Basic Authentication, Forms Authentication and Windows Authentication. Can someone please help me? NOTE: The respective values I get from each attempt are shown in the code comments.
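
    NT AUTHORITY\NETWORK SERVICE is the worker-process account, which is what WindowsIdentity.GetCurrent() returns when the request is not actually impersonating a browser user; the visitor's name comes through HttpContext.Current.User.Identity.Name once IIS is really performing Windows authentication. A sketch of the relevant web.config section, on the assumption that Windows Authentication is the only authentication enabled in IIS 7.5 (Anonymous Authentication disabled):

        <system.web>
          <!-- let IIS negotiate the browser user's Windows identity -->
          <authentication mode="Windows" />
          <authorization>
            <!-- refuse anonymous requests so User.Identity is always populated -->
            <deny users="?" />
          </authorization>
        </system.web>

    With that in place, the "3rd way" above (HttpContext.Current.User.Identity.Name) should no longer come back empty.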

    Read the article

  • Why is only one Excel spreadsheet crippled, but others are fine?

    - by Dallas
    I have an inherited spreadsheet that I really don't want to rebuild at the moment. It's a simple, small workbook (< 200 rows that don't even reach column AA) that does nothing more than calculate some totals within the same worksheets. No macros, no external data sources, nothing beyond basic formatting of dates, numbers and strings. I see that importing data from CSV/text has created many, many workbook connections over time, but even if I delete them all (there were hundreds), it makes no difference in performance. Even clicking to simply change focus from cell to cell takes 10+ seconds, adorned by the spinning cursor, (Not Responding) appended to the title bar, and the application locking up. The program seems to "recover" every time, but the efficiency of editing this file is obviously seriously handicapped. All other files seem fine in Excel, and other programs have no apparent performance issues. I see Excel is chewing up CPU, but I'm not sure how to narrow down which process or service is "clashing" with Excel. I tried the same file on other computers, and performance is fine. If I turn off all start-up services and run only Excel, performance is restored... until I start using other programs, and then it bogs down again. At this point, I would entertain almost any idea, theory or suggestion that helps pinpoint, solve or work around the issue.

    Read the article
