Search Results

Search found 9250 results on 370 pages for 'weekend projects'.


  • Filezilla client unable to get directory listing from Filezilla Server (Windows)

    - by sestocker
    I've set up a self-signed certificate in FileZilla Server and enabled FTP over SSL/TLS. When I connect from the FileZilla client, I can authenticate but cannot get a directory listing:

        Status: Connecting to MY_SERVER_IP:21...
        Status: Connection established, waiting for welcome message...
        Response: 220-FileZilla Server version 0.9.39 beta
        Response: 220-written by Tim Kosse ([email protected])
        Response: 220 Please visit http://sourceforge.net/projects/filezilla/
        Command: AUTH TLS
        Response: 234 Using authentication type TLS
        Status: Initializing TLS...
        Status: Verifying certificate...
        Command: USER MYUSER
        Status: TLS/SSL connection established.
        Response: 331 Password required for MYUSER
        Command: PASS ********
        Response: 230 Logged on
        Command: PBSZ 0
        Response: 200 PBSZ=0
        Command: PROT P
        Response: 200 Protection level set to P
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PORT 10,10,25,85,219,172
        Response: 200 Port command successful
        Command: MLSD
        Response: 150 Opening data channel for directory list.
        Response: 425 Can't open data connection.
        Error: Failed to retrieve directory listing

    I have ports 21 and 50001 through 50005 open on the firewall. We are migrating servers - opening 50001-50005 is one of the things that helped get FTPS working on the old server. I'm not sure this installation would use the same ports? What else could be the problem?

    Read the article
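
    A hedged sketch for the FileZilla question above: the 425 happens on the data connection, not on port 21. A common setup is to use passive mode, pin FileZilla Server's passive port range (typically under Edit > Settings) to the same 50001-50005 range the old server used, and open that range inbound on the Windows firewall. The range and rule name below are assumptions carried over from the question, and the command assumes Windows Server 2008 or later:

        netsh advfirewall firewall add rule name="FTPS passive 50001-50005" dir=in action=allow protocol=TCP localport=50001-50005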

  • Help me please with this error

    - by Brandon
    I set up IIS and moved my folder with all the files to the IIS directory. Now when I go to http://localhost/thefolder I get:

        An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)]
        Luxand.FSDK.ActivateLibrary(String LicenseKey) +0
        FaceRecognition._Default.Page_Load(Object sender, EventArgs e) in D:\Project Details\Layne Projects\DotNet Project\FaceRecognition\FaceRecognition\Default.aspx.cs:60
        System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
        System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
        System.Web.UI.Control.OnLoad(EventArgs e) +132
        System.Web.UI.Control.LoadRecursive() +66
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428

    Read the article
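
    For the BadImageFormatException above: that HRESULT typically appears when a 32-bit native library (here, the Luxand FSDK wrapper) is loaded into a 64-bit IIS worker process. A hedged sketch, assuming IIS 7+ and the default application pool name:

        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true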

  • Improving browser performance while using lots of tabs?

    - by Andrew
    My browsing habits cause me to open lots of windows and tabs, either related to different projects I'm working on or things I may want to read later. I use OS X with about 5 spaces and multiple windows in each space. The problem is that eventually I'll have around 200 or more tabs open (spread over 15-20 windows) that I don't want to close. Needless to say, my computer's performance starts to degrade. As I write this on my mobile, Safari on my laptop is locking up the computer. I used to use Chrome but found better performance with Safari. What I'd like to know is: is there a graph of browser performance based on tab usage? I don't need a browser that keeps all tabs active. It would be great if the browser could increase performance by "putting tabs to sleep", or if there was some sort of tool for saving a "workspace" of tabs that you could reactivate the next time you are working on that project. What sort of solution can you recommend to solve this problem?

    Read the article

  • Simulated NAT Traversal on Virtual Box

    - by Sumit Arora
    I have installed VirtualBox with two virtual adapters (both NAT type) - host: Ubuntu 10.10, guest: openSUSE 11.4. Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing - "Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network."

        Full cone:
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source
        iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination

        Restricted cone:
        iptables -t nat -A POSTROUTING -o eth1 -p tcp -j SNAT --to-source
        iptables -t nat -A POSTROUTING -o eth1 -p udp -j SNAT --to-source
        iptables -t nat -A PREROUTING -i eth1 -p tcp -j DNAT --to-destination
        iptables -t nat -A PREROUTING -i eth1 -p udp -j DNAT --to-destination
        iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP
        iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP

        Port-restricted cone:
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source

        Symmetric:
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables --flush
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

    What I did: the openSUSE guest has two virtual adapters, eth0 and eth1 - eth1 with address 10.0.3.15 (plus alias eth1:1 as 10.0.3.16) and eth0 with address 10.0.2.15. I am now running stund (http://sourceforge.net/projects/stun/) client/server:

        Server: eKimchi@linux-6j9k:~/sw/stun/stund> ./server -v -h 10.0.3.15 -a 10.0.3.16
        Client: eKimchi@linux-6j9k:~/sw/stun/stund> ./client -v 10.0.3.15 -i 10.0.2.15

    In all four cases it gives the same results:

        test I = 1
        test II = 1
        test III = 1
        test I(2) = 1
        is nat = 0
        mapped IP same = 1
        hairpin = 1
        preserver port = 1
        Primary: Open
        Return value is 0x000001

    Q-1: Please let me know if anyone has ever done this. It should behave like a NAT as per the description, but it is not working as a NAT in any of the four cases.
    Q-2: How is NAT implemented in home routers (usually port-restricted)? Are those also just pre-configured iptables rules on a tuned Linux?

    Read the article
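
    A hedged note on the VirtualBox NAT question above: with both guest adapters set to VirtualBox's own "NAT" type, the stun client and server traffic stays on the same guest and never crosses an iptables NAT, which would be consistent with "is nat = 0". One way to exercise the rules for real is a three-VM layout (client - NAT router - server) on two VirtualBox internal networks, with the router applying e.g. the full-cone rules. The interface roles and addresses below are hypothetical, not taken from the question:

        # on the router VM: eth0 = private side (intnet1), eth1 = public side (intnet2)
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 192.168.2.1
        iptables -t nat -A PREROUTING -i eth1 -j DNAT --to-destination 192.168.1.10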

  • Cannot open files in Visual Studio, but can in Delphi and Notepad

    - by Andrew J. Brehm
    About an hour ago Visual Studio 2008 decided that it cannot find files any more. This is on 64-bit Windows Vista. When I right-click on a text file (source code or otherwise) and select "Open with" and "Visual Studio 2008", I get the following error (example):

        Windows cannot find 'C:\Users\ajbrehm\Documents\Visual Studio 2008\Projects\Hello Prism\Hello Prism\Main.pas'. Make sure you typed the name correctly, and then try again.

    When I right-click the same file and select "Open with" and "Delphi 2010" or "Notepad" (the other options available for text files on my system), the file opens correctly. Oddly enough, when the file is part of a Visual Studio project and I open the project itself with Visual Studio (this works), I can open the file from within Visual Studio. Any ideas what might be going on? This started about an hour after I made a complete backup of my Vista VM and after I installed IIS 7, SQL Express, and SourceGear Vault. The first files I noticed couldn't be opened in Visual Studio any more were Pascal source files in checked-out folders from Vault. Vault also seems to be unable to see one of the source files and claims it doesn't exist. I found out about Visual Studio not opening ANY files any more when I tried to recreate the file Vault refused to see. Update: I just checked. Another user, "administrator", can still open text files with Visual Studio 2008. Both users have administrator rights. Update: I just restored the hours-old backup. Same problem. Apparently whatever triggered this happened before the install of IIS 7 and SQL Express. I never noticed it before.

    Read the article

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course the technology vendors make them sound very nice. A concern I read very often in different forums is that there is a theoretical possibility of the chassis going down - which would in consequence take all the blades down, because of the shared infrastructure. My reaction to this would be to have redundancy and buy two chassis instead of one (very costly, of course). Some people (including HP vendors) try to convince us that the chassis is very, very unlikely to fail because of its many redundancies (redundant power supplies, etc.). Another concern on my side is that if something does go down, spare parts might be required - which is difficult in our location (Ethiopia). So I would ask experienced administrators who have managed blade servers: what is your experience? Do they go down as a whole - and what is the shared infrastructure that can realistically fail? The question could be extended to shared storage. Again I would say that we need two storage units instead of only one - and again the vendors say that these things are so rock solid that no failure is expected. Well, I can hardly believe that such critical infrastructure can be very reliable without redundancy - but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive... Thanks a lot, best regards, Christian

    Read the article

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with websockets. Does anyone know of a websockets server (open source or paid) that provides a durable store of the websocket "channel"? All of the examples that I have found do not address durability - if a websockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel. EDIT: I'm not looking for websockets 101 information; that is readily available and understood. I'm looking for a server (open source or paid) that supports websockets and has a durable store for the websocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: (1) support the failover scenarios contemplated by the websockets Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server); (2) support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer... but that is not what I am looking for. EDIT: So, after some research, the following installed options seem to be the most robust: Kaazing and Migratory (http://migratory.ro). Hosted services that seem "real": Pusher (great API but no history feature yet) and PubNub (has history). All of the above services have graceful fallback to other communication methods if websockets are not available. I was not able to find any open source that provided "out of the box" clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

    Read the article

  • Not able to run Firefox on a headless Ubuntu Server 9.10

    - by Julio J.
    I need to run Firefox on my server in order to execute some Selenium tests from Hudson. I would love not to have to install a complete GUI, so I installed Xvfb to fake one (that is my understanding of it; correct me if my assumptions are wrong). After some time trying to make it work, I'm stuck with this situation:

        $ sudo Xvfb -ac :99 &
        [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)

        $ firefox
        [dix] Could not init font path element /var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType, removing from list!
        [config/dbus] couldn't register object path
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        (EE) config/hal: NewInputDeviceRequest failed (2)
        Xlib: extension "RANDR" missing on display ":99.0".
        GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information. (Details - 1: Failed to get connection to session: /bin/dbus-launch terminated abnormally without any error message)

    I'm running Firefox without installing it from the repositories, and I get a socket timeout when I try to run the Selenium tests, so I guess the problem is in Firefox and Xvfb. I have already installed the following package, which some forums suggest as a fix, but in my case it doesn't work:

        i gconf-defaults-service - GNOME configuration database system (system defaults service)

    Any explanation of the problem, and ways of solving it without installing a full GUI, would be very helpful.

    Read the article
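
    A hedged sketch for the headless Firefox question above: the GConf error points at the lack of a session D-Bus rather than at Xvfb itself, so one common workaround is to start Firefox inside a D-Bus session on the virtual display. The display number is taken from the question; the screen geometry is an assumption:

        Xvfb :99 -ac -screen 0 1280x1024x24 &
        export DISPLAY=:99
        dbus-launch --exit-with-session firefox &
        # or let xvfb-run manage the X server and display number for you:
        xvfb-run -a -s "-screen 0 1280x1024x24" dbus-launch firefox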

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/kvm and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and during startup (?) configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and set up rudimentary security (e.g. drop all INPUT traffic except the few ports I want open), and of course I'd like the ability to add/remove rules safely without restarting. It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?

    Read the article
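
    Not a standard, but one hedged pattern that coexists with both tools in the iptables question above: keep your own rules in a dedicated chain that you can flush and rebuild at will, and never touch the chains fail2ban and libvirt manage. The chain name and ports below are hypothetical:

        # one-time setup (e.g. from /etc/rc.local): create a private chain and hook it into INPUT
        iptables -N MYRULES
        iptables -A INPUT -j MYRULES
        # safe to re-run at any time: rebuild only your own chain,
        # leaving the fail2ban-* chains and libvirt's forwarding rules untouched
        iptables -F MYRULES
        iptables -A MYRULES -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A MYRULES -p tcp --dport 22 -j ACCEPT
        iptables -A MYRULES -p tcp --dport 80 -j ACCEPT
        iptables -A MYRULES -j DROP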

  • Nexus One WiFi connection problems.

    - by sunocky
    I have two new Nexus One phones for a research project, and for the project I need to keep a server running on the phone. But I soon found out that both phones have intermittent WiFi connection problems at my home: a phone can connect to my WiFi network, but will drop off after a random amount of time, and in order to reconnect I may need to reboot my router, or the phone will say "Obtaining IP address" and then "Unsuccessful". I also own a G1 with firmware version 1.6, and it has no such connection problems. To my surprise, the two Nexus Ones work fine with the WiFi network at my workplace, which is a WEP connection; my home network uses WPA. Does anyone know what the problem with the Nexus One is? Any suggestions on how to keep the WiFi connection alive all the time at home? Thanks very much!

    Read the article

  • Grant HTTP access based on unix user group

    - by Sander Marechal
    Is it possible to grant network access or HTTP access based on a user's group? At my company we want to set up an internal composer server using Satis to manage packages for the projects we write (e.g. on repository.mycompany.com), with the packages themselves in our SVN server (svn.mycompany.com). We have several webservers with many different users on them. Some users should be able to reach the composer and SVN server. Some should not. Users that should be able to reach these servers all belong to the same group. How can I set up Apache on the Composer and SVN server to only grant access to those users in that group? Alternatively, can I set up the webservers in such a way that only users from that group are able to make a connection to our Composer and SVN servers? The best thing we have come up with so far is using SSL client certificates. We simply place a client certificate on all servers which can be used to access Composer and SVN. Only the right usergroup will have read access to the certificate. A bit clunky but it may work. But I'm looking for something better.

    Read the article
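
    A hedged sketch of the client-certificate variant mentioned above: Apache on the Composer/SVN server requires a client certificate (mod_ssl's SSLVerifyClient require against an internal CA), and on each webserver the certificate file is readable only by the allowed unix group, so only members of that group can make authenticated requests. The group name, path and URL below are assumptions:

        # on each webserver: let only members of the 'deploy' group read the client certificate
        chgrp deploy /etc/ssl/private/repo-client.pem
        chmod 640 /etc/ssl/private/repo-client.pem
        # members of the group can then reach the Satis repository; everyone else fails the TLS handshake
        curl --cert /etc/ssl/private/repo-client.pem https://repository.mycompany.com/packages.json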

  • Trouble getting latest version of Git

    - by TheMethod
    I am using Ubuntu 10.04 LTS. I'm looking at using Git as source control for personal projects and GitHub as a remote repository. I was having trouble pushing a commit to my remote GitHub repo, getting the following error message:

        The requested URL returned error: 403 while accessing https://github.com/Jstall/helloworld.git/info/refs

    When I did some digging I found that the problem could be that I don't have the latest version of Git. When I ran --version I found that I have version 1.7.0.4 locally. So I tried to update Git using:

        sudo apt-get install git

    but I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package git is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package git has no installation candidate

    I've tried running sudo apt-get update and trying again, but it didn't seem to make a difference. I'm not sure if it's relevant, but I'm also getting a couple of 404s when I run update:

        Err http://wine.budgetdedicated.com edgy/main Packages 404 Not Found
        Fetched 4,117B in 0s (5,142B/s)
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/edgy/universe/binary-i386/Packages.gz 404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://wine.budgetdedicated.com/apt/dists/edgy/main/binary-i386/Packages.gz 404 Not Found

    I'm not sure what to try next. Could anyone suggest a course of action to get this resolved? Any advice would be appreciated. Thanks much!

    Read the article
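
    Two hedged suggestions for the Git question above, neither guaranteed to be the fix: on Ubuntu 10.04 the Git package is named git-core rather than git, and the 403 on an https push often goes away when the remote uses SSH instead (the repository path is taken from the question; SSH requires a key registered with GitHub):

        sudo apt-get update
        sudo apt-get install git-core
        # switch the remote from HTTPS to SSH and push again
        git remote set-url origin git@github.com:Jstall/helloworld.git
        git push origin master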

  • Affordable combined Ruby/Rails/Redmine + Subversion hosting?

    - by Pekka
    I'm a self-employed web developer and after nine years of hard work I'm looking to become a bit more "vagrant" starting next year: do some much-needed traveling, and work off and on, making use of one of the greatest advantages of a programming job - the ability to work from virtually anywhere. For that, I am looking for a reliable hosting company I can entrust my code to, in the form of a number of Subversion repositories and an installation of the Redmine project management tool. As my financial situation may vary while traveling, I am looking for something I can pay up front for a year or two, and which is obviously not too pricey. I don't care where the company is located, as long as it's trustworthy and solid, meaning it's not likely to go out of business next month. Does anybody have good recommendations - preferably from your own, personal, good experience? I have looked at CVSDude / Codesion and while they are certainly great, they don't offer Redmine, and seem to be aiming mainly at bigger organisations. What I would need: 2-5 GB of space minimum, freely distributable between SVN and Redmine attachments; an unlimited number of Subversion projects; access control (team members / checkout-only accounts / etc. - I don't mind configuring the svn settings on a file basis myself); the possibility to map a custom domain to the package that is hosted elsewhere; and frequent backups with access to those backups through FTP or other means. I have been running my own virtual server for this until now, but I don't want the hassle, especially on the security side, while I may not always have the internet connection to fix problems that come up.

    Read the article

  • repo sync "CyanogenMod/android_prebuilt" size and resume capability?

    - by james
    I'm downloading the CyanogenMod 10.1 source on a low-speed broadband connection; about 4 GB of source is downloaded so far. Within that 4 GB there is one big project, CyanogenMod/android_frameworks_base, which alone was a 1 GB download that completed without any interruption. Now, after 4 GB of downloading, my internet got disconnected and I had to stop (Ctrl+Z) repo sync while it was downloading the project CyanogenMod/android_prebuilt. Before I stopped repo sync, android_prebuilt had downloaded 250 MB and was at 42 percent. I checked the working folder and there is a file tmp_pack_df5CKb of size 250 MB in the path $WORKING_DIR/.repo/projects/prebuilt.git/objects/pack/. Then I restarted repo sync and it began downloading the android_prebuilt project again, but I'm not sure whether it started from the beginning or resumed from 250 MB. This time the previous tmp_pack_df5CKb isn't deleted and the content is being downloaded to a new file, tmp_pack_HPfvFG. I've heard that repo sync cannot be resumed for a project, but since the previous file isn't deleted I want to ask whether android_prebuilt is resuming or downloading from the start again. Now that my high-speed quota is used up (current speed 256 kbps), I'm not sure I can download the remaining ~4 GB if a single project of around 500 MB has to restart from scratch.

    Read the article
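
    A hedged way to check, and to limit the damage, for the repo question above: git's pack transfer generally restarts rather than resumes, and a new tmp_pack_* file growing from zero while the old one sits untouched suggests exactly that. You can at least sync the interrupted project on its own with a single job and watch what happens; the project path is inferred from the .repo/projects/prebuilt.git path in the question:

        # sync only the interrupted project, one connection at a time
        repo sync -j1 prebuilt
        # watch which pack file is growing
        ls -lh .repo/projects/prebuilt.git/objects/pack/
        # once the project syncs cleanly, the stale temporary pack can be removed
        rm .repo/projects/prebuilt.git/objects/pack/tmp_pack_df5CKb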

  • Removing resource limits on Solaris 10

    - by mikeydonkey
    How should one remove all potential artificial resource limitations for a process? I just saw a case where a server application consumed resources until some limitation was hit, and every other shell session on the same server became extremely slow (waiting for something to free up; e.g. prstat took 5 minutes to start). It wasn't a CPU/memory problem, so I think it has something to do with ulimits / projects. I already managed to set the maximum number of open files to 500,000 and it helped a little bit. However, there is something else and I cannot figure out which resource is maxed out. I can probably get an in-house administrator to check this, but I would like to understand how I can make sure there aren't any limitations. If you think I am going the wrong way (that it would be better to figure out which specific limitation should be tuned, etc.), please feel free to point me in the correct direction. I know the technical stuff - it's just Solaris 10 that is giving me a headache :/

    Read the article
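
    A hedged sketch for the Solaris 10 question above, using the stock resource-control tools: first see which project the process runs in and which rctls apply to it, then raise the suspect controls on that project. The project name "default" and the values are examples, not recommendations:

        # which project is the process in, and what limits apply?
        ps -o project= -p <pid>
        prctl -i process <pid>
        prctl -i project default
        projects -l
        # raise specific resource controls on the project (example values)
        projmod -s -K "process.max-file-descriptor=(priv,500000,deny)" default
        projmod -s -K "project.max-shm-memory=(priv,8gb,deny)" default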

  • Multi-WAN bonding across different media

    - by Tom O'Connor
    I've recently been thinking again about a product that Viprinet provide: basically they've got a pair of routers, one that lives in a datacentre (their VPN Multichannel Hub) and the on-site hardware (their VPN Multichannel routers). They've also got a bunch of interface cards (like HWICs) for 3G, UMTS, Ethernet, ADSL and ISDN. Their main spiel seems to be bonding across different media. It's something that I'd really like to use for a couple of projects, but their pricing is quite extreme: the hub is about 1-2k, the routers are 2-6k, and the interface modules are 200-600 each. So, what I'd like to know is: is it possible, with a couple of stock Cisco routers (28xx or 18xx series), to do something similar - basically connect a bunch of different WAN ports, but have it all presented neatly as one channel back to the internet, with seamless (or nearly seamless) failover if one of the WAN interfaces should fail? Basically, if I got 3x 3G-to-Ethernet modems, each on a different network, I'd like to be able to load-balance/bond across all of them without having to pay Viprinet for the privilege. Does anyone know how I'd go about configuring something like this myself, based on standard protocols (or vendor-specific ones), but without actually having to buy the Viprinet hardware?

    Read the article

  • My home box as my own host?

    - by Majid
    Hi all, I have 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay my ISP a bit extra. Now, if I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks. P.S. I know that, even if it is possible, the site I make available this way might be slow; that is all right - my question is whether it is possible at all. Edit: I need this for very small amounts of traffic. I am a PHP developer and for my projects I am often asked to provide a demo. I currently use free hosting for this purpose, but it is down most of the time and support is non-existent, so I thought I would set up my home computer as my test server. Please note that I will only occasionally have visitors, and that will be one or possibly two visitors at any time. These demos are to showcase functionality, so no big images are served and page views will normally generate under 100 KB of traffic.

    Read the article

  • Blink build with Xcode failed

    - by Merci
    I found a GPLed SIP client for Mac, Blink. I'd like to build it from source since the binaries are only available as a paid download. Just FYI, I'm studying programming at university but have no experience building complex applications from source. After downloading the content of the repository, I opened the Xcode project and tried to build on OS X 10.7 with Xcode 4.2.1. Unfortunately the build fails with 1 error and many warnings. Most of the warnings are like this:

        Attribute Unavailable: Custom Identifiers in Interface Builder versions prior to 3.2

    The error message is:

        Apple Mach-O Linker (ld) Error
        Command /Developer/usr/bin/clang failed with exit code 1

    preceded by the warning:

        Apple Mach-O Linker (ld) Warning
        directory not found for option '-L/Users/Sergio/Downloads/Blink/devel.ag-projects.com/repositories/public/blink-cocoa/Distribution/Frameworks'

    I notice that the following required files are missing from the project:

        Dependencies/Frameworks: libgcrypt.11.6.0.dylib, libgcrypt.11.dylib, libgnutls-extra.26.dylib, libgnutls.26.dylib, libgpg-error.0.dylib, libintl.8.dylib, liblzo.1.dylib, libtasn1.3.dylib
        Dependencies/Resources: lib
        Frameworks/Linked Frameworks: Sparkle.framework
        Products: Blink.app

    It should be possible to download these files somewhere, but unfortunately googling did not help, and there's no documentation on the project site.

    Read the article

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with a customer which then goes to the competitor. The administration of such a matrix is a nightmare because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret material, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal because high view counts do not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

    Read the article

  • OpenOffice Calc: How can I count the number of different items with data pilot?

    - by manu
    Hi all, I have a rather long spreadsheet with historical information about issues solved by users in a collaborative environment. The spreadsheet has the following (relevant) columns: date, week no., project, author id, etc. The week no. is calculated from the date and is basically the year concatenated with the week number within that year; for instance, both 2009-02-18 and 2009-02-20 yield the week number 200908 - the 8th week of 2009 - and 2009-02-23 yields 200909 - the 9th week of 2009. I need to count how many different users (given by author id) contributed to each project, on a weekly basis. I have set up a data pilot with the week as Row Field, the project as the Column Field, and count-author as the Data Field. However, this counts the author ids as separate instances, which is not what I need. I expect to get something like:

        week      Project1   Project2   Project3
        200901    10         2
        200902    2          7

    with each inner cell containing how many different users contributed that week. With the count-author configuration, what I get instead is how many contributions (in total) the project received that week. Is there a way to tell OpenOffice Calc to do what I want?

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project must make live data viewable to customers. Its main points are: (1) devices in the field send data to the server (the devices are also website users) after logging in; (2) a background import process loads the uploaded data into Postgres; (3) the web users of the system work with this data and can send commands to the devices, which the devices read when they log in; (4) there are also background analysis routines running over the data. All of the above is deployed on one Amazon EC2 machine, and the project currently supports over 600 devices and 400 users. But as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices between the two, but we still need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if that is possible?

    Read the article
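
    A hedged sketch for the scaling question above, assuming PostgreSQL 9.1 or later is an option: one common first step is a streaming-replication read replica, with Django sending writes to the primary and reads to the replica via a database router. The hostnames, user and paths below are hypothetical:

        # on the primary (postgresql.conf / pg_hba.conf):
        #   wal_level = hot_standby
        #   max_wal_senders = 3
        #   host replication repl 10.0.0.0/24 md5
        # on the replica: clone the primary and start it as a hot standby
        pg_basebackup -h primary.internal -U repl -D /var/lib/postgresql/9.1/main -P
        cat > /var/lib/postgresql/9.1/main/recovery.conf <<'EOF'
        standby_mode = 'on'
        primary_conninfo = 'host=primary.internal user=repl'
        EOF
        # also set hot_standby = on in the replica's postgresql.conf, then:
        pg_ctlcluster 9.1 main start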

  • SVN checkout returns 400 error

    - by eboix
    I'm trying to download the http://code.opencv.org/svn/opencv/trunk/ repository containing all of the OpenCV source code, as specified in an OpenCV installation tutorial. The tutorial uses the repository https://code.ros.org/svn/opencv/trunk/, but it has moved to http://code.opencv.org/svn/opencv/trunk/, and you now need a password to access the code.ros.org repository. Anyway, I'm using TortoiseSVN to check out the SVN repository (I get the same error with http://sourceforge.net/projects/win32svn/). I get this:

        Checkout from http://code.opencv.org/svn/opencv/trunk, revision HEAD, Fully recursive, Externals included
        Server sent unexpected return value (400 Bad request. Method Unknown) in response to REPORT request for '/svn/opencv/!svn/vcc/default'

    On the TortoiseSVN site I found the following about this 400 error: "You're behind a firewall which blocks DAV requests. Most firewalls do that. Either ask your Administrator to change the firewall, or access the repository with https:// instead of http:// like in https://svn.collab.net/repos/svn/ That way you connect to the repository with SSL encryption, which firewalls can't interfere with (if they don't block the SSL port completely). Also some virus scanners (i.e. Kapersky) are known to interfere and cause this error."

    The code.ros.org repository is https://, so I would be able to access it, but I need a password, which I don't have. I made an account on ros.org, but it seems that I still need a different password to access the code repository; my username-password combination does not work. I unblocked all of the TortoiseSVN programs in my firewall settings - nothing changed. I temporarily stopped my firewall to see if it was interfering with my request - same error. How can I do svn checkout http://code.opencv.org/svn/opencv/trunk/opencv/ so that I don't get this error? Is there any way to make it https://? Any help would be appreciated!

    Read the article
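
    A couple of hedged things to try for the SVN question above, shown with the command-line client (the same ideas apply in TortoiseSVN's settings): attempt the checkout over https in case the server answers on it, and, if an intermediate proxy is mangling the DAV REPORT method, declare the proxy explicitly in Subversion's servers file. The proxy host and port are placeholders:

        # try SSL first - if the server supports it, DAV-filtering proxies are bypassed
        svn checkout https://code.opencv.org/svn/opencv/trunk/opencv/ opencv
        # otherwise declare the proxy in %APPDATA%\Subversion\servers (or ~/.subversion/servers):
        #   [global]
        #   http-proxy-host = proxy.example.com
        #   http-proxy-port = 8080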

  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and was advised to ask it here. We have a self-hosted Git server (Gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). The same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the machine is as a Git server only. This Git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get this error message:

        ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
        fatal: The remote end hung up unexpectedly

    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also note that if I access other files hosted on the server over HTTP, that works fine. I found a couple of questions on Stack Overflow and other sites about this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. I don't think that is the problem in our case, as we see this behaviour on Windows machines (using msysgit only) as well as on Macs - and if it were an SSH configuration issue it shouldn't work at all, whereas in our case it works after 10-15 minutes. I think it might be too many simultaneous connections to the same server (or the same repo), or something like that. Is there a setting or a conf file that needs to be modified to solve this problem? Please help me solve it or point me in the right direction. Thanks in advance. Pritam.

    Read the article
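
    One hedged thing to check for the Git/SSH question above: OpenSSH starts refusing new connections once the number of concurrent unauthenticated connections exceeds MaxStartups (the default is low, commonly 10), which matches "works off-peak, fails under load". A sketch, assuming a Debian/Ubuntu-style layout on the VPS:

        # current setting (no match means the compiled-in default applies)
        grep -i maxstartups /etc/ssh/sshd_config
        # allow more simultaneous pre-auth connections: start dropping at 30, always drop at 100
        echo "MaxStartups 30:50:100" | sudo tee -a /etc/ssh/sshd_config
        sudo /etc/init.d/ssh restart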

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

        NameVirtualHost 127.0.0.1
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/sub/
            ServerName sub.project.com
            ServerAdmin [email protected]
        </VirtualHost>
        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/project/
            ServerName project.com
            ServerAdmin [email protected]
        </VirtualHost>

    And this in my hosts file:

        # development
        127.0.0.1 localhost
        127.0.0.1 project.org
        127.0.0.1 sub.project.org

    When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages) I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.

    The error log shows this:

        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/
        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.

    Read the article
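
    A hedged note on the 500 error above: "Request exceeded the limit of 10 internal redirects" usually means a RewriteRule in the project's .htaccess keeps matching its own target, rather than a VirtualHost problem. A typical front-controller .htaccess that avoids the loop looks roughly like the sketch below; the RewriteBase and the index.php entry point are assumptions about the project, not known facts:

        # sketch of C:/xampp/htdocs/project/.htaccess
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]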

  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my hard drive. I've been using it for two days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today that's not much of an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free, and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: some of you asked why I want backups. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will get screwed up.

    Read the article
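
    Not the Time Machine-style tool asked for above, but a hedged sketch of how close a purely local Git repository gets to "transparent infinite undo": everything lives inside the working folder, there is no server and no explicit checkout, and snapshots can be scripted or scheduled. The path and tag names are made up:

        cd /c/projects/myproject            # hypothetical path, from Git Bash on Windows
        git init
        git add -A && git commit -m "baseline"
        # snapshot whatever is saved on disk, as often as you like (or from a scheduled task)
        git add -A && git commit -m "autosave $(date)"
        # tag a known-good state, then browse or restore old versions later
        git tag working-state-1
        git log --oneline -- path/to/file.txt
        git checkout working-state-1 -- path/to/file.txt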
