Search Results

Search found 2558 results on 103 pages for 'highly irregular'.

Page 83/103 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • Space between 2 vertices on a heightmap

    - by Sietse
    First off, I am sorry for the crazy title. I couldn't really think of something else. I am working on a hobby project: an infinite world generated with Perlin noise, Java and LWJGL. But I am having a problem that is kinda hard to explain, so I made a video: http://youtu.be/D_NUBJZ_5Kw Obviously the problem is the black spaces in between all the pieces of ground. I have no idea what is causing it. I already tried making all the values doubles instead of floats, but that didn't fix it. Here is a piece of code I am using: float height2, height = (float)getHeight(x, y); height2 = (float) ((getHeight(x-1, y+1) + height) / 2); vertexhelper.addVertexColorAndTexture(x, height2, y+1, r, g, b, a, 0f, 1f); height2 = (float) ((getHeight(x+1, y+1) + height) / 2); vertexhelper.addVertexColorAndTexture(x+1, height2, y+1, r, g, b, a, 1f, 1f); height2 = (float) ((getHeight(x+1, y-1) + height) / 2); vertexhelper.addVertexColorAndTexture(x+1, height2, y, r, g, b, a, 1f, 0f); height2 = (float) ((getHeight(x-1, y-1) + height) / 2); vertexhelper.addVertexColorAndTexture(x, height2, y, r, g, b, a, 0f, 0f); I loop through this at the initialization of a chunk with x-16 and y-16. vertexhelper is a class I made that just puts everything in an array. (I am using floats here, but that's after doing the maths, so that shouldn't be a problem.) I highly appreciate you reading this.
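
    A likely cause of gaps like these is that each quad derives its corner heights by averaging its own centre height with a diagonal neighbour, so two quads that share a corner compute different heights for the same vertex and a crack opens between them. Below is a minimal sketch of one way to keep shared vertices identical; it is only an illustration of the idea, with stand-in method names (getHeight is assumed to be the poster's own height lookup, and addVertex stands in for addVertexColorAndTexture):

        // Sketch: a corner height depends only on the corner's own coordinates,
        // so every quad that shares corner (cx, cy) computes exactly the same value.
        abstract class ChunkMeshSketch {
            // Stand-ins for the poster's own members, assumed to exist in the real class.
            abstract double getHeight(int x, int y);
            abstract void addVertex(float x, float height, float y, float u, float v);

            float cornerHeight(int cx, int cy) {
                return (float) ((getHeight(cx - 1, cy - 1) + getHeight(cx, cy - 1)
                               + getHeight(cx - 1, cy) + getHeight(cx, cy)) / 4.0);
            }

            void emitQuad(int x, int y) {
                addVertex(x,     cornerHeight(x,     y + 1), y + 1, 0f, 1f);
                addVertex(x + 1, cornerHeight(x + 1, y + 1), y + 1, 1f, 1f);
                addVertex(x + 1, cornerHeight(x + 1, y),     y,     1f, 0f);
                addVertex(x,     cornerHeight(x,     y),     y,     0f, 0f);
            }
        }

    If getHeight is already a per-vertex noise sample, using getHeight(cx, cy) directly as the corner height works just as well; the only requirement is that adjacent quads agree on the height of every shared vertex.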

    Read the article

  • Creating dynamic icons based on data entered into database from django forms

    - by John Hoke
    So I'm using Django to create a projects page with multiple forms for each project. Let's call them form 1, 2, 3, and 4. Once you create a project you can fill out any of these forms. I want to create "buttons" or links for each one of the forms that would show up on the main page. Now this is the part I need help with: Step 1. I want it so that if you click on the button for a form (say form 1) and none exists for that project yet, a pop-up would come up saying "This form does not exist yet, are you sure you want to create one?". And if you answer yes, you would be directed to the form page. Step 2. But if that form does exist, I don't want any pop-up to open, and I want the link to take the user directly to that page. Step 3. My next problem is this. These forms are in order, so if you didn't create form 1 but created form 2, I don't want to give the user access to form 1. So in this scenario, if you click on form 1 I want a pop-up to open and say "This form can no longer be created", and the link wouldn't function anymore. Basically, the button will have 3 functions. First it should look at the database: if data for that specific form exists it should do "Step 2", if data for that form and the subsequent forms doesn't exist it should do "Step 1", and if data for that form doesn't exist but data for a subsequent form does exist it should do "Step 3". Is this possible? Please help as I need to find a solution to this soon. Any help would be highly appreciated. Thank you

    Read the article

  • Length of a string in pixels

    - by Rose
    Guys, I'm populating a DropDownList with an ArrayCollection of strings. I want the width of the drop-down list control to match the size (in pixels) of the longest string in the array collection. The problem I'm facing is that the character widths of the strings in the collection differ, e.g. 'W' looks wider than 'l'. So I estimated the width of a character to be 8 pixels, but that's not very accurate: if a string with many 'W's and 'M's is encountered, the estimate is wrong. So I want the precise pixel width of the strings. How can I get the exact width of a string in pixels? My solution, which estimates every character to be 8 pixels wide, is given below: public function populateDropDownList():void{ var array:Array = new Array("o","two","three four five six seven eight wwww"); var sampleArrayCollection:ArrayCollection = new ArrayCollection(array); var customDropDownList:DropDownList = new DropDownList(); customDropDownList.dataProvider=sampleArrayCollection; customDropDownList.prompt="Select ..."; customDropDownList.labelField="Answer Options:"; //calculating the max width var componentWidth=10; //default for each(var answerText in array){ Alert.show("Txt size: "+ answerText.length + " pixels: " + answerText.length*9); if(answerText.length * 8 > componentWidth){ componentWidth=answerText.length * 8; } } customDropDownList.width=componentWidth; answers.addChild(customDropDownList); } Any idea or solution is highly valued. Thanks
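
    For comparison, the usual alternative to guessing a fixed per-character width is to ask the text engine to measure the rendered string; in Flex that is the component's measureText() call, which returns TextLineMetrics with a width property. The same measure-instead-of-estimate idea is sketched below in Java, purely as an illustration (the class name and font choice are invented for the example):

        import java.awt.Font;
        import java.awt.FontMetrics;
        import java.awt.image.BufferedImage;

        // Illustration only: measure the pixel width of each string with real
        // font metrics instead of assuming 8 pixels per character.
        public class WidestString {
            static int widthInPixels(String s, Font font) {
                BufferedImage scratch = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
                FontMetrics fm = scratch.createGraphics().getFontMetrics(font);
                return fm.stringWidth(s);
            }

            public static void main(String[] args) {
                Font font = new Font("Arial", Font.PLAIN, 12);   // hypothetical font/size
                String[] options = {"o", "two", "three four five six seven eight wwww"};
                int widest = 0;
                for (String s : options) {
                    widest = Math.max(widest, widthInPixels(s, font));
                }
                System.out.println("Widest option: " + widest + " px");
            }
        }

    The control's width would then be the widest measurement plus a little padding for the component's chrome, rather than answerText.length * 8.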

    Read the article

  • Is method reference caching a good idea in Java 8?

    - by gexicide
    Consider I have code like the following: class Foo { Y func(X x) {...} void doSomethingWithAFunc(Function<X,Y> f){...} void hotFunction(){ doSomethingWithAFunc(this::func); } } Consider that hotFunction is called very often. Would it then be advisable to cache this::func, maybe like this: class Foo { Function<X,Y> f = this::func; ... void hotFunction(){ doSomethingWithAFunc(f); } } As far as my understanding of Java method references goes, the Virtual Machine creates an object of an anonymous class when a method reference is used. Thus, caching the reference would create that object only once, while the first approach creates it on each function call. Is this correct? Should method references that appear at hot positions in the code be cached, or is the VM able to optimize this and make the caching superfluous? Is there a general best practice about this, or is it highly VM-implementation-specific whether such caching is of any use?
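
    One way to see what a given JVM actually does - an experiment rather than a guarantee, since the behaviour is left to the implementation - is to evaluate the same bound method reference twice and compare the resulting objects. A self-contained sketch with made-up types:

        import java.util.function.Function;

        class MethodRefCheck {
            Integer func(String x) { return x.length(); }

            void check() {
                Function<String, Integer> f1 = this::func;
                Function<String, Integer> f2 = this::func;
                // this::func captures `this`, so on common JVMs each evaluation
                // typically allocates a fresh object; a non-capturing reference
                // may instead be cached and reused by the runtime.
                System.out.println(f1 == f2);   // usually false for a bound reference
            }

            public static void main(String[] args) {
                new MethodRefCheck().check();
            }
        }

    In other words, caching the Function in a field does avoid repeated allocation on typical implementations, but whether that matters in practice is best confirmed with a profiler or a JMH benchmark rather than assumed.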

    Read the article

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJOs' class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that each table have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can: automatically generate a CREATE TABLE definition from the class definition; automatically generate a query to persist an object to the database, given an instance of the object; and automatically generate a query to retrieve an object from the database and return it as a POJO, given a key? Solutions that can do this with minimum modifications/annotations to the class files and minimum external configuration are preferred. Example: Java classes //Class to be persisted class TypeA { String guid; long timestamp; TypeB data1; TypeC data2; } class TypeB { int id; int someData; } class TypeC { int id; int otherData; } could map to CREATE TABLE TypeA ( guid CHAR(255), timestamp BIGINT, data1_id INT, data1_someData INT, data2_id INT, data2_otherData INT ); Or something similar.
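
    One existing solution that covers all three points is JPA (with, for example, Hibernate or EclipseLink as the provider): the schema can be generated from annotations, and an EntityManager handles the insert and lookup queries. A minimal sketch of how the example classes might be annotated - assuming a persistence.xml is configured, and ignoring constructors and access modifiers for brevity - maps the nested types with @Embedded so they flatten into the owning table:

        import javax.persistence.*;

        @Entity
        class TypeA {
            @Id
            String guid;
            long timestamp;

            // Column names overridden so both embedded types fit in one table,
            // roughly matching the data1_*/data2_* layout above.
            @Embedded
            @AttributeOverrides({
                @AttributeOverride(name = "id",       column = @Column(name = "data1_id")),
                @AttributeOverride(name = "someData", column = @Column(name = "data1_someData"))
            })
            TypeB data1;

            @Embedded
            @AttributeOverrides({
                @AttributeOverride(name = "id",        column = @Column(name = "data2_id")),
                @AttributeOverride(name = "otherData", column = @Column(name = "data2_otherData"))
            })
            TypeC data2;
        }

        @Embeddable
        class TypeB { int id; int someData; }

        @Embeddable
        class TypeC { int id; int otherData; }

    Persisting and retrieving then reduce to em.persist(instance) and em.find(TypeA.class, guid) on an EntityManager, and the provider's schema-generation setting (for example hibernate.hbm2ddl.auto) can emit the CREATE TABLE statements.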

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding the indexes. I made use of the amazing T-SQL script from MSDN to alter indexes based on the fragmentation percent returned by dm_db_index_physical_stats: if the fragmentation percent is more than 30, do a REBUILD, otherwise do a REORGANIZE. What I found out was that, in the first iteration, there were 87 records which needed defragmenting. I ran the script and all 87 indexes (clustered & nonclustered) were rebuilt or reorganized. When I got the stats from dm_db_index_physical_stats, there were still 27 records which needed defragmenting, and all of these were NONCLUSTERED indexes. All the clustered indexes were fixed. No matter how many times I run the script to defragment these records, I still have the same indexes needing defragmentation, most of them with the same fragmentation %. Nothing seems to change after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations. Still, the rebuild/reorganize did not result in any change. More information: Using SQL Server 2008. The script is as available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx Could you please explain why these 27 nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated. Nod

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers and replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations), this sounds like the two key features one would need. DNS services like Amazon's Route53, EasyDNS and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offers me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out these two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP" or "Global Traffic Director" and charging extra for the service. If a core tenet of an Anycast-capable system is to do this already, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia -- I understand what is being advertised, just not why it isn't implied already)? I get extra confused when a DNS service like Route53 that doesn't support this nebulous "GeoDNS" feature lists functionality like: Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs. ... which sounds exactly like what GeoDNS is intended to do, but geographically directing clients is something they explicitly don't support yet. Ultimately I am looking for the following two features from a DNS provider: map multiple IP addresses to a single domain name (like google.com, amazon.com, etc. do), and utilize a DNS service that will respond to client requests for that domain with the IP address of the nearest server to the requester. As mentioned, it seems like this is all part of an "Anycast" DNS service (which all of these services are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.

    Read the article

  • Understanding vhosts settings

    - by Matt
    OK, so I have a server that I want to put a few applications on, and I am having vhost configuration problems. Here is what I have, and I want some direction on what I am doing wrong. The first file is /etc/apache2/ports.conf: NameVirtualHost 184.106.111.142:80 Listen 80 <IfModule mod_ssl.c> Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> Then I have /etc/apache2/sites-available/somesite.com: <VirtualHost 184.106.111.142:80> ServerAdmin [email protected] ServerName somesite.com ServerAlias www.somesite.com DocumentRoot /srv/www/somesite.com/ ErrorLog /srv/www/somesite.com/logs/error.log CustomLog /srv/www/somesite.com/logs/access.log combined <Directory "/srv/www/somesite.com/"> AllowOverride all Options -MultiViews </Directory> </VirtualHost> When I visit somesite.com everything works great, but then I add another vhost, let's say named anothersite.com. So I have /etc/apache2/sites-available/anothersite.com: <VirtualHost 184.106.111.142:80> ServerAdmin [email protected] ServerName anothersite.com ServerAlias www.anothersite.com DocumentRoot /srv/www/anothersite.com/ ErrorLog /srv/www/anothersite.com/logs/error.log CustomLog /srv/www/anothersite.com/logs/access.log combined <Directory "/srv/www/anothersite.com/"> AllowOverride all Options -MultiViews </Directory> </VirtualHost> Then I run the following commands: >> sudo a2ensite anothersite.com Enabling site anothersite.com. Run '/etc/init.d/apache2 reload' to activate new configuration! >> /etc/init.d/apache2 reload * Reloading web server config apache2 ...done. But when I visit anothersite.com or somesite.com, they are both down. What is going on with the vhosts? Could it be the NameVirtualHost declaration with the IP or something? Maybe my understanding of vhost settings is not clear. What I don't understand is why both sites now all of a sudden don't work at all. I would highly appreciate some clarity. By the way, anothersite.com and somesite.com are the only names I changed, to make this more readable.

    Read the article

  • Flash was "not designed to function across LANs". Any workarounds?

    - by Triynko
    See: http://helpx.adobe.com/flash/kb/problems-using-flash-authoring-across.html Issue When using Adobe Flash across a local area network (LAN) and networked drives/folders, you may experience any of the following problems: Flash crashes while performing a test movie on FLA files located on a networked drive or folder. FLA files get corrupted when opening from or saving to networked drives or folder. Flash does not reflect changes in custom class after compiling. Flash, Flash Video Encoder, or Adobe Media Encoder crashes or corrupts Flash Video (FLV) files while encoding source located on networked drives or folder. Flash Video Encoder or Adobe Media Encoder crashes or corrupts FLV files where the output folder is a networked drive or folder. Published Flash Player (SWF) files and projectors are unable to load content located on networked drives or folder. More than one instance of a SWF or Projector on client machines cannot play back FLV files located on a networked drive or folder. Reason The Adobe Flash IDE, FLV Encoder, Adobe Media Encoder and Flash Player were not designed to function across LANs. Solution Use of Flash files across local networks is not supported in any context. Published content should access data through a web server. All file sources should be opened and saved on the local system. Using Flash in such a scenario for project collaboration or content deployment is highly discouraged and may corrupt your source files. If you need to work in a collaborative environment or store source files on a server, use the project panel and/or a third-party version control system. SERIOUSLY? I cannot work on files located on a mapped network drive? How did they mess that one up? Does the Flash IDE really open the source file and wipe it clean to do the saving, rather than saving a copy first then replacing it as an atomic file system operation? How hard would it be for them to make a dummy temporary file for saving and then issue a MOVE command? Any workarounds for this, like something that can make a network drive as stable as a local drive, like some kind of automatic local caching and synching?

    Read the article

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention to stop using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough. Requirements: Each server runs Linux CentOS 6.3 Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ) Some of the servers are capable of serving static files and Python-driven REST APIs Some of the servers host a Cassandra database cluster One or more of the servers are Redis database servers One or more of the servers are PostgreSQL servers Questions: What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together? Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked; I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitations? What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? What other things are we missing that we should be concerned about? Thanks so much!

    Read the article

  • Lock ups, crashing, transferring files using TrueCrypt with iSCSI

    - by Anthony
    I have looked into this error and it seems that it hasn't been discussed yet - or at least I can't find any related information. I'm having issues transferring files, usually larger files over a couple of hundred MB. Here is the setup: QNAP 410 as iSCSI target with multiple LUNs (CRC is turned on: Data Digest and Header Digest). Server 2003 with iSCSI Initiator version 2.08 - build 3825 (I'm copying files from another machine to shares on Server 2003, i.e. into the TrueCrypt volume and thus onto the NAS). I have mounted the LUN and formatted it with TrueCrypt using NTFS (full format, not a quick one). What happens is that some files, mainly RAR/compressed files, appear as if they copy but fail. I've tested this in a number of ways and can repeat the process every time. So I thought to check a transfer over iSCSI without TrueCrypt in between, with a plain NTFS format - no problem at all. So it would seem TrueCrypt is at least part of the problem here. I haven't tried copying directly from the server yet; I will try that. I also haven't tried it without CRC, but I fail to see how that would affect this. I will update with my findings later. In the meantime, does anyone have any ideas as to what could be wrong? Thanks for your time. Update: I copied a set of files, the ones I was having issues with, to the server, and then from there I copied them into two places within the TrueCrypt volume (mounted on the NAS): a separate directory created in the root of the volume, and the same initial directory I was using in the first instance. Both worked fine. So it now seems clear that this is a link between TrueCrypt, iSCSI and Windows shares. I say this because I originally set up the whole system using TrueCrypt volume files, not iSCSI. I changed it as it didn't suit my requirements - a day wasted as well. While I had that setup, though, I copied my entire file set to the volume files and all files copied without error - over the network, from a PC, to the server where TrueCrypt had the volume files mounted from the NAS. I didn't bother turning off CRC on the iSCSI system as I highly doubt that is the cause in light of this finding. So, any ideas?

    Read the article

  • Proxy / Squid 2.7 / Debian Wheezy 6.7 / lots of TCP Timed-out

    - by Maroon Ibrahim
    i'm facing a lot of TCP timed-out on a busy cache server and here below my sysctl.conf configuration as well as an output of "netstat -st" Kernel 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3 x86_64 GNU/Linux Any advice or help would be highly appreciated #################### Sysctl.conf cat /etc/sysctl.conf net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_tw_recycle = 1 fs.file-max = 65536 net.ipv4.tcp_low_latency = 1 net.core.wmem_max = 8388608 net.core.rmem_max = 8388608 net.ipv4.ip_local_port_range = 1024 65000 fs.aio-max-nr = 131072 net.ipv4.tcp_fin_timeout = 10 net.ipv4.tcp_keepalive_time = 60 net.ipv4.tcp_keepalive_intvl = 10 net.ipv4.tcp_keepalive_probes = 3 kernel.threads-max = 131072 kernel.msgmax = 32768 kernel.msgmni = 64 kernel.msgmnb = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 net.ipv4.ip_forward = 1 net.ipv4.tcp_timestamps = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_sack = 0 net.ipv4.tcp_syncookies = 1 net.ipv4.ip_dynaddr = 1 vm.swappiness = 0 vm.drop_caches = 3 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_ecn = 0 net.ipv4.tcp_max_orphans = 131072 net.ipv4.tcp_orphan_retries = 1 net.ipv4.conf.default.rp_filter = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv4.tcp_max_syn_backlog = 32768 net.core.netdev_max_backlog = 131072 net.ipv4.tcp_mem = 6085248 16227328 67108864 net.ipv4.tcp_wmem = 4096 131072 33554432 net.ipv4.tcp_rmem = 4096 174760 33554432 net.core.rmem_default = 33554432 net.core.rmem_max = 33554432 net.core.wmem_default = 33554432 net.core.wmem_max = 33554432 net.core.somaxconn = 10000 # ################ Netstat results /# netstat -st IcmpMsg: InType0: 2 InType3: 233754 InType8: 56251 InType11: 23192 OutType0: 56251 OutType3: 437 OutType8: 4 Tcp: 20680741 active connections openings 63642431 passive connection openings 1126690 failed connection attempts 2093143 connection resets received 13059 connections established 2649651696 segments received 2195445642 segments send out 183401499 segments retransmited 38299 bad segments received. 14648899 resets sent UdpLite: TcpExt: 507 SYN cookies sent 178 SYN cookies received 1376771 invalid SYN cookies received 1014577 resets received for embryonic SYN_RECV sockets 4530970 packets pruned from receive queue because of socket buffer overrun 7233 packets pruned from receive queue 688 packets dropped from out-of-order queue because of socket buffer overrun 12445 ICMP packets dropped because they were out-of-window 446 ICMP packets dropped because socket was locked 33812202 TCP sockets finished time wait in fast timer 622 TCP sockets finished time wait in slow timer 573656 packets rejects in established connections because of timestamp 133357718 delayed acks sent 23593 delayed acks further delayed because of locked socket Quick ack mode was activated 21288857 times 839 times the listen queue of a socket overflowed 839 SYNs to LISTEN sockets dropped 41 packets directly queued to recvmsg prequeue. 
79166 bytes directly in process context from backlog 24 bytes directly received in process context from prequeue 2713742130 packet headers predicted 84 packets header predicted and directly queued to user 1925423249 acknowledgments not containing data payload received 877898013 predicted acknowledgments 16449673 times recovered from packet loss due to fast retransmit 17687820 times recovered from packet loss by selective acknowledgements 5047 bad SACK blocks received Detected reordering 11 times using FACK Detected reordering 1778091 times using SACK Detected reordering 97955 times using reno fast retransmit Detected reordering 280414 times using time stamp 839369 congestion windows fully recovered without slow start 4173098 congestion windows partially recovered using Hoe heuristic 305254 congestion windows recovered without slow start by DSACK 933682 congestion windows recovered without slow start after partial ack 77828 TCP data loss events TCPLostRetransmit: 5066 2618430 timeouts after reno fast retransmit 2927294 timeouts after SACK recovery 3059394 timeouts in loss state 75953830 fast retransmits 11929429 forward retransmits 51963833 retransmits in slow start 19418337 other TCP timeouts 2330398 classic Reno fast retransmits failed 2177787 SACK retransmits failed 742371590 packets collapsed in receive queue due to low socket buffer 13595689 DSACKs sent for old packets 50523 DSACKs sent for out of order packets 4658236 DSACKs received 175441 DSACKs for out of order packets received 880664 connections reset due to unexpected data 346356 connections reset due to early user close 2364841 connections aborted due to timeout TCPSACKDiscard: 1590 TCPDSACKIgnoredOld: 241849 TCPDSACKIgnoredNoUndo: 1636687 TCPSpuriousRTOs: 766073 TCPSackShifted: 74562088 TCPSackMerged: 169015212 TCPSackShiftFallback: 78391303 TCPBacklogDrop: 29 TCPReqQFullDoCookies: 507 TCPChallengeACK: 424921 TCPSYNChallenge: 170388 IpExt: InBcastPkts: 351510 InOctets: -609466797 OutOctets: -1057794685 InBcastOctets: 75631402 #

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    OK, here's the scenario: Up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then, a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But, it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same with http works fine. That distinction (SSL vs. Non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the Request and Response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner. My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): Frequent timeouts on PHP pages. Firefox 2, Firefox 3: No timeouts. Firebug shows no errors or even lengthy periods serving the pages. I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

    Read the article

  • Increase Timeout for remote sessions in Debian 5 Lenny

    - by Ash
    I always get a remote connection time out when using PuTTy and also when i send emails with attachments from a mail sever installed on Debian. I always get this error. I'm not sure if this is firewall or the new Debian 5 installation which i made. Is there any settings i need to fix after fresh install. Any inputs are highly appreciated. This error is pulling my brains out. Thanks. Error: 2011-01-10 15:21:13,454 INFO [btpool0-23://69.19.19.89/service/upload?fmt=extended] [[email protected];mid=72;ip=10.10.01.78;ua=Mozilla/5.0 (Windows;; U;; Windows NT 5.2;; en-US;; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729);] FileUploadServlet - File upload failed org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. timeout at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:367) at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126) at com.zimbra.cs.service.FileUploadServlet.handleMultipartUpload(FileUploadServlet.java:430) at com.zimbra.cs.service.FileUploadServlet.doPost(FileUploadServlet.java:412) at javax.servlet.http.HttpServlet.service(HttpServlet.java:727) at com.zimbra.cs.servlet.ZimbraServlet.service(ZimbraServlet.java:181) at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166) at com.zimbra.cs.servlet.SetHeaderFilter.doFilter(SetHeaderFilter.java:79) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:81) at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:132) at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:218) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418) at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230) at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.rewrite.RewriteHandler.handle(RewriteHandler.java:230) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.DebugHandler.handle(DebugHandler.java:77) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:326) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:543) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:405) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:413) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451) Caused by: org.mortbay.jetty.EofException: timeout at 
org.mortbay.jetty.HttpParser$Input.blockForContent(HttpParser.java:1172) at org.mortbay.jetty.HttpParser$Input.read(HttpParser.java:1122) at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:977) at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:887) at java.io.InputStream.read(InputStream.java:85) at org.apache.commons.fileupload.util.Streams.copy(Streams.java:94) at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64) at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362) ... 33 more

    Read the article

  • Configured MySQL for SSL , but SLL is still not in use..!

    - by Sunrays
    I configured SSL for MySQL using the following script. #!/bin/bash # mkdir -p /root/abc/ssl_certs cd /root/abc/ssl_certs # echo "--> 1. Create CA cert, private key" openssl genrsa 2048 > ca-key.pem echo "--> 2. Create CA cert, certificate" openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem echo "--> 3. Create Server certificate, key" openssl req -newkey rsa:2048 -days 1000 -nodes -keyout server-key.pem > server-req.pem echo "--> 4. Create Server certificate, cert" openssl x509 -req -in server-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > server-cert.pem echo "" echo echo "" echo "--> 5. Create client certificate, key. Use DIFFERENT common name then server!!!!" echo "" openssl req -newkey rsa:2048 -days 1000 -nodes -keyout client-key.pem > client-req.pem echo "6. Create client certificate, cert" openssl x509 -req -in client-req.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > client-cert.pem exit 0 The following files were created: ca-key.pem ca-cert.pem server-req.pem server-key.pem server-cert.pem client-req.pem client-key.pem client-cert.pem Then I combined server-cert.pem and client-cert.pem into ca.pem (I read in a post to do so..) I created a ssl user in MySQL: GRANT ALL ON *.* to sslsuer@hostname IDENTIFIED BY 'pwd' REQUIRE SSL; Next I added the following in my.cnf [mysqld] ssl-ca = /root/abc/ssl_certs/ca.pem ssl-cert = /root/abc/ssl_certs/server-cert.pem ssl-key = /root/abc/ssl_certs/server-key.pem After restarting the server,I connected to mysql but SSL was still not in use :( mysql -u ssluser -p SSL: Not in use Even the have_ssl parameter was still showing disabled.. :( mysql> show variables like '%ssl%'; +---------------+---------------------------------------------+ | Variable_name | Value | +---------------+---------------------------------------------+ | have_openssl | DISABLED | | have_ssl | DISABLED | | ssl_ca | /root/abc/ssl_certs/ca.pem | | ssl_capath | | | ssl_cert | /root/abc/ssl_certs/server-cert.pem | | ssl_cipher | | | ssl_key | /root/abc/ssl_certs/server-key.pem | +---------------+---------------------------------------------+ Have I missed any step, or whats wrong.. Answers with missed steps in detail will be highly appreciated..

    Read the article

  • Problem deploying GWT application on apache and tomcat using mod_jk

    - by Colin
    I'm trying to deploy a GWT app on Apache using mod_jk connector. I have compiled the application and tested it on tomcat on the address localhost:8080/loginapp and it works ok. However when I deploy it to apache using mod_jk I get the starter page which gives me a login form but trying to login I get this error 404 Not Found Not Found The requested URL /loginapp/loginapp/login was not found on this server Looking at the apache log files i see this [Thu Jan 13 13:43:17 2011] [error] [client 127.0.0.1] client denied by server configuration: /usr/local/tomcat/webapps/loginapp/WEB-INF/ [Thu Jan 13 13:43:26 2011] [error] [client 127.0.0.1] File does not exist: /usr/local/tomcat/webapps/loginapp/loginapp/login, referer: http://localhost/loginapp/LoginApp.html The mod_jk configurations on my apache2.conf file are as follows LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/mod_jk.log JkLogLevel info JkLogStampFormat "[%a %b %d %H:%M:%S %Y] " JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories JkRequestLogFormat "%w %V %T" <IfModule mod_jk.c> Alias /loginapp "/usr/local/tomcat/webapps/loginapp/" <Directory "/usr/local/tomcat/webapps/loginapp/"> Options Indexes +FollowSymLinks AllowOverride None Allow from all </Directory> <Location /*/WEB-INF/*> AllowOverride None deny from all </Location> JkMount /loginapp/*.html loginapp My workers.properties file is as follows workers.tomcat_home=/usr/local/tomcat workers.java_home=/usr/lib/jvm/java-6-sun ps=/ worker.list=loginapp worker.loginapp.type=ajp13 worker.loginapp.host=localhost worker.loginapp.port=8009 worker.loginapp.cachesize=10 worker.loginapp.cache_timeout=600 worker.loginapp.socket_keepalive=1 worker.loginapp.recycle_timeout=300 worker.loginapp.lbfactor=1 And this is my servlet mappings for my app on the application's web.xml <servlet> <servlet-name>loginServlet</servlet-name> <servlet-class>com.example.loginapp.server.LoginServiceImpl</servlet-class> </servlet> <servlet-mapping> <servlet-name>loginServlet</servlet-name> <url-pattern>/loginapp/login</url-pattern> </servlet-mapping> <servlet> <servlet-name>myAppServlet</servlet-name> <servlet-class>com.example.loginapp.server.MyAppServiceImpl</servlet-class> </servlet> <servlet-mapping> <servlet-name>myAppServlet</servlet-name> <url-pattern>/loginapp/mapdata</url-pattern> </servlet-mapping> Ive tried everything and it seems to still elude me. Even tried changing the deny from all directive on the WEBINF folder to allow from all and still it doesnt work. Maybe im missing something. Any help will be highly appreciated.

    Read the article

  • load-causing processes disappearing from "top" ps -o pcpu shows bogus numbers

    - by Alec Matusis
    I administer a large number of servers, and I have this problem only with Ubuntu 10.04 LTS: I run a server under normal load (say load average 3.0 on an 8-core server). The "top" command shows processes taking certain % of CPU that cause this load average: say PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 11008 mysql 20 0 25.9g 22g 5496 S 67 76.0 643539:38 mysqld ps -o pcpu,pid -p11008 %CPU PID 53.1 11008 , everything is consistent. The all of the sudden, the process causing the load average disappears from "top", but the process continues to run normally (albeit with a slight performance decrease), and the system load average becomes somewhat higher. The output of ps -o pcpu becomes bogus: # ps -o pcpu,pid -p11008 %CPU PID 317910278 1587 This happened to at least 5 different severs (different brand new IBM System X hardware), each running different software: one httpd 2.2, one mysqld 5.1, and one Twisted Python TCP servers. Each time the kernel was between 2.6.32-32-server and 2.6.32-40-server. I updated some machines to 2.6.32-41-server, and it has not happened on those yet, but the bug is rare (once every 60 days or so). This is from an affected machine: top - 10:39:06 up 73 days, 17:57, 3 users, load average: 6.62, 5.60, 5.34 Tasks: 207 total, 2 running, 205 sleeping, 0 stopped, 0 zombie Cpu(s): 11.4%us, 18.0%sy, 0.0%ni, 66.3%id, 4.3%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 74341464k total, 71985004k used, 2356460k free, 236456k buffers Swap: 3906552k total, 328k used, 3906224k free, 24838212k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 805 root 20 0 0 0 0 S 3 0.0 1493:09 fct0-worker 982 root 20 0 0 0 0 S 1 0.0 111:35.05 fioa-data-groom 914 root 20 0 0 0 0 S 0 0.0 884:42.71 fct1-worker 1068 root 20 0 19364 1496 1060 R 0 0.0 0:00.02 top Nothing causing high load is showing on top, but I have two highly loaded mysqld instances on it, that suddenly show crazy %CPU: #ps -o pcpu,pid,cmd -p1587 %CPU PID CMD 317713124 1587 /nail/encap/mysql-5.1.60/libexec/mysqld and #ps -o pcpu,pid,cmd -p1624 %CPU PID CMD 2802 1624 /nail/encap/mysql-5.1.60/libexec/mysqld Here are the numbers from # cat /proc/1587/stat 1587 (mysqld) S 1212 1088 1088 0 -1 4202752 14307313 0 162 0 85773299069 4611685932654088833 0 0 20 0 52 0 3549 27255418880 5483524 18446744073709551615 4194304 11111617 140733749236976 140733749235984 8858659 0 552967 4102 26345 18446744073709551615 0 0 17 5 0 0 0 0 0 the 14th and 15th numbers according to man proc are supposed to be utime %lu Amount of time that this process has been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK). This includes guest time, guest_time (time spent running a virtual CPU, see below), so that applications that are not aware of the guest time field do not lose that time from their calculations. stime %lu Amount of time that this process has been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK). On a normal server, these numbers are advancing, every time I check the /proc/PID/stat. On a buggy server, these numbers are stuck at a ridiculously high value like 4611685932654088833, and it's not changing. Has anyone encountered this bug?

    Read the article

  • High load on X3220 Quad Core Linux Apache server

    - by John Templar
    I'm seriously in need of help. My sites are now nearly impossible to use because of massive loads on my server. I'm already a month late on my mortgage and this really isn't helping my situation. I've been working on fixing this intermittent load problem for months (never this bad). I'm suspecting some kind of attack since I'm under DDOS attack a lot! I've been trying to figure out what is causing the load but I'm afraid I just don't have the experience or knowledge to understand all the data I've been looking at. I don't even know where to begin or how to test for the large array of attacks out there. Here's some data you might find useful... Server: Xeon X3220 Quad Core 2.4 GHz - Linux, FreeBSD 500 GB HD and 8 Gig of Ram. Runs Centos release 5.7 Server Version: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_qos/9.74 Warning: All sites are softcore adult sites - mostly fantasy art like elves and amazons. 1) Sites may run fine for weeks or just days at less than 10 load then start jumping to 40-80 load - no idea why. Same sites, same mods, same amount of traffic - just WHAM! 2) I get an email almost every day that says: "Large Number of Failed Login Attempts from IP (different each time)". My webhost (who almost never helps me) told me it was a udp flood or something. 3) I've changed the port for MySQL from the default. If I ever put it back to the default - I get Loads of over 100 from what must be a constant mysql port flood. 4) I've reconfigured MYSQL. Link: http://www.deadlyamazons.com/logs/mycnf.txt 5) I have 3 Joomla Jomsocial networks. I've spent a couple weeks turning all the mods/plugins off, waiting a day and then turning them back on the next day or later if there isn't any change (there hasn't been). For example, on Thursday I'll turn off videos, on Friday I'll turn off chat.. etc and nothing changes the load appreciably. 6) Joomla info: All SEF turned off - sh404sef completely disabled and removed. Components: Joomla 1.5.22, Jomsocial 2.0.5, Kunena 1/31/2011, HWDMediashare 11/22/2010 and JBolo Chat 2.7.3, Comet Chat or Envolve Chat. Page Compression is on, Cache is on 15 mins. Please click on this forum to see links to all my reports: http://forum.joomla.org/viewtopic.php?f=433&t=706035&p=2777500#p2777500 Any help would be highly appreciated.

    Read the article

  • Use an audio/video file from a Linux laptop via USB to be played by Magic Sing ET-23H

    - by AisIceEyes
    I am one of the technical directors of a regular karaoke contest event. For the karaoke contest itself, due to a tight budget, we are using what one of the sponsors is providing - a Magic Sing ET-23H. The video output of the Magic Sing ET-23H is broadcast on two big screens shown to the audience and event attendees. When a karaoke contestant provides his/her karaoke video, the video itself is on a readable USB flash drive attached to the USB input of the Magic Sing ET-23H. What really bugs me is that the interface of the Magic Sing ET-23H is also broadcast on the big-screen video feeds. The interface for choosing the video file is seen on the Magic Sing ET-23H - and also on the big video screens seen by the audience and event goers. I will post in the comments (if my less-than-10 reputation allows me) a picture of the USB input of the Magic Sing ET-23H device. I always bring my laptop, an Acer AS5742-7653, to the regular karaoke event. I'm also using my laptop for tallying scores from the judges, and for playing audio files from contestants that did not provide a karaoke video. I personally use different Linux distros, but nearly all the time I use my Ubuntu Studio 12.04.3 64-bit partition during the regular karaoke contest event. My question is this: Is there a way I can share a temporary video/audio file directly from the laptop I'm using to the Magic Sing ET-23H, so that it can broadcast the video/audio file? Just like Windows' AviSynth AVS files, or VirtualDub's temporary AVI files, or using ffplay (from ffmpeg), etc. I have researched the matter somewhat and found links on SuperUser.com, though I can only provide the links in the comments section of this post if my reputation of less than 10 allows me. I have a hunch it is possible, but I have not fully understood the device being used at the event, the Magic Sing ET-23H, or whether there are other ways for it to broadcast video and audio files besides its USB input. Any help with my current predicament is highly appreciated. Thank you. PS: Since I need at least 10 reputation to post more than 2 links and also post images, I will try to post the image & links in the comments (if my below-10 reputation allows me).

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else. We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VOIP) and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were connectable. Also any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? ok). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial. I had the thought of running two (or more) OpenVPN servers in the network. These would have to have the same subnet though, as our application relies on virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses but that's not vital. For simplicity, lets call the current server office and the second server I'm setting up, cloud. Call the server on the T1 phone. This proved to be rather complex because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing) and vice-versa. There's no rules for iptables that would be blocking the traffic either. Recently I came across this article on linuxjournal but the solution they provide seems to only cover the use of two servers and somewhat outdated (can't even find much documentation, their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc. So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect to. Thanks for reading and sorry for the long post, I hope it gets the point across :P

    Read the article

  • What the best way to achieve RPO of zero and lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which is processing high volumes of transactions 24/7 and are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor however their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write to the DR site but this is: A bad solution Getting the software vendor hot and bothered whenever we have support issues as it has a tendency to interfere with the payment application which is very DB intensive. We recently upgraded the whole system to run on SQL 2008 R2 Enterprise which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: Zero loss - This is transactional data with financial impact so we can't lose anything. RTO: Tending to zero - The business depends on our ability to process transactions every minute we are down we are losing money I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication? My current thinking is that we should use mirroring but I am concerned that for RPO:0 we will need to do delayed commits and this could impact the performance of the primary DB which is not an option. Our current DR process is to: Stop incoming traffic to the primary site and allow all in-flight transaction to complete. Allow the replication to DR to complete. Change network routing to route to DR site. Start all applications and services on the secondary site (Ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words the DR database needs to, as quickly as possible, catch up with primary and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too) and can anyone suggest other considerations that we should keep in mind?

    Read the article

  • Trouble creating FTP in Server 2008

    - by Saariko
    I have been trying to create an FTP server on my new Server 2008 machine. I have been following two very detailed and frequently referenced guides: for setting up using IIS Manager (http://learn.iis.net/page.aspx/321/configure-ftp-with-iis-7-manager-authentication/) and for anonymous FTP (http://www.trainsignaltraining.com/windows-server-2008-ftp-iis7). I am able to log in as an anonymous user. My need is to use a named user, so I need to use IIS Manager. I get error 530 when trying to log in as a user. Connected to 127.0.0.1. 220 Microsoft FTP Service User (127.0.0.1:(none)): ftpmanager 331 Password required for ftpmanager. Password: 530-User cannot log in. Win32 error: Logon failure: unknown user name or bad password. Error details: Filename: Error: 530 End Login failed. ftp> I cannot learn anything from this message. My password is set to 1234 (so I don't think I am making a mistake here - testing purposes only, of course). Thank you. Note - I went over other posts on SE, and couldn't get a result: IIS7 Windows Server 2008 FTP -> Response: 530 User cannot log in. FTP Error 530, User cannot log in, home directory inaccessible. Having trouble setting up FTP server on Windows Server 2008 EDIT I think I found some errors with the physical path. Going to Basic Settings, and Test Connection on the physical path, gave me the following error: The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that \$ has Read access to the physical path. Then test these settings again. I am not sure who should get access to the root folder. I want to point out that I managed to log in with a domain user (by changing the authorization and authentication methods), but this is NOT the requested solution. I checked to make sure that the FTP service, folders and access are working properly. I am a bit lost here. ==== More tries: I have enabled another Allow rule for All Users. I still get the same error. It seems that it doesn't matter if I use a correct or wrong password; I still get error 530.

    Read the article

  • a couple of questions about proxy server,vpn & how they works

    - by Q8Y
    I have a couple of questions that are related to security. Correct me if I'm wrong :) If I want to request something (e.g. visiting www.google.com): my computer will make the request, then it will go to the ISP, then to my ISP's proxy server, which will take the request and act as a middle man in this situation - ask for the site (www.google.com), retrieve it, and then send it back to me. I know that it's done like that. So, my question is that, in this situation, my ISP knows everything, including what I requested, and the proxy server is set by default (when I sign up for an internet subscription). So, if I use another proxy here (let's assume it is highly anonymous and my ISP can't detect my IP address through it), would I go to my ISP and then from my ISP be redirected to the new proxy server that I provide? Will it know that someone is using another proxy? Or will the request go to another network rather than my ISP's? I don't have a clear view of this. This question is related to the first one. When I use a VPN, I know that the VPN provides tunneling, encryption and many more features that a proxy can't. So my data is travelling securely and my ISP can't know what I'm doing. But my questions are: Where does the tunneling start? Does it start after I reach the ISP's network (since they are the ones responsible for forwarding my data and requests)? If so, then not all of my connection is tunneled in this way; there is a part that is not being tunneled, since every time I need to do anything I have to go through my ISP. Correct me if I misunderstand this. I know that a VPN can let my computer be virtually in another place and access its resources (e.g. be as if in my office while I'm at home; this is done via VPN). Say I use a VPN service provider so that I can access the internet securely and without being monitored by my ISP. In this case, where is my encrypted data stored? Is it stored at my ISP or at the VPN service provider? If I use a VPN, does anyone on the internet know what I'm doing or who I am? Even the VPN service provider? Can they identify me? I think they would at least know the person who is asking for the VPN service, am I right?

    Read the article

  • Reviews Cheyney Group Marketing: What accounting softwares are available in the market for small businesses?

    - by user225556
    Accounting is the language of business, and good accounting software can save you hundreds of hours at the business equivalent of Berlitz. There's no substitute for an accounting pro who knows the ins and outs of tax law, but today's desktop packages can help you with everything from routine bookkeeping to payroll, taxes, and planning. Each package also produces files that you can hand off to an accountant as needed. Small-business managers have more accounting software options than ever, including subscription Web-based options that don't require their users to install or update software. Many businesses, however--including those that need to track large inventories or client databases, and those that prefer not to entrust their data to the cloud--may be happier with a desktop tool. We looked at three general-purpose, small-business accounting packages: Acclivity AccountEdgePro 2012 (both the product and the company were previously called MYOB), Intuit QuickBooks Premier 2012, and Sage's Sage 50 Complete 2013 (the successor to Peachtree Complete). All three packages offer a solid array of tools for tracking income and expenses, invoicing, managing payroll, and creating reports. These full-featured and highly mature programs don't come cheap. Acclivity AccountEdge Pro, at $299, is the least expensive; and prices climb if you opt to use common time-saving add-ons such as payroll services, or if you add licenses for multiple user accounts. All three are solid on the basics, but they have distinct differences in style and focus. The more you know about your accounting requirements, the more closely you'll want to look at the software you're thinking of buying. Sage 50 Complete should appeal most to people who understand the fine points of accounting and can use the product's many customization features (especially for businesses that manage inventory). QuickBooks works hard to appeal to newbies who need only the basics and might be intimidated by the level of detail and technical language exposed in the other two packages. At the same time, it also has a slew of third-party add-ons that meet specific needs and greatly expand its capabilities. AccountEdge Pro balances accessibility with a strong feature set at an affordable price. It's especially suitable for businesses that need to provide simultaneous access to multiple users.

    Read the article
