Search Results

Search found 5963 results on 239 pages for 'rest wing'.

  • Android SDK and Java

    - by Soonts
    Android SDK Manager complains "WARNING: Java not found in your path". Instead of using the information from the Windows registry, the software tries to find Java in the default installation folders, and fails (I don't install software in Program Files because I don't like space characters in my paths). Of course I know how to modify the %PATH% environment variable. The question is — which Java does it need? After installing the latest JDK, I've got 4 distinct versions of the java.exe file, in the following 4 folders: system32, jre6\bin, jdk1.6.0_26\bin, and jdk1.6.0_26\jre\bin. Sizes range from 145184 to 171808 bytes. All of them print version "1.6.0_26" when launched with the "-version" argument. The one in system32 has .exe version "6.0.250.6"; the rest are "6.0.260.3". All 4 files are different (I've calculated the MD5 checksums). Q1. Which folder should I add to %PATH% to make the Android SDK happy? Q2. Why does Oracle build that many variants of java.exe of the same version for the same platform? Thanks in advance! P.S. I'm using Windows 7 SP1 x64 Home Premium, and downloaded the 64-bit version of the JDK, jdk-6u26-windows-x64.exe.
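
    For what it's worth, the SDK only needs a java.exe reachable through %PATH%, and the JDK's bin folder is the usual pick, since the SDK tools also look for the compiler that lives next to it. A quick-and-dirty sketch (the no-spaces install path is hypothetical):

        rem Hypothetical install path; adjust to wherever the JDK actually lives.
        rem Prepend so this java.exe wins over the copy in system32.
        rem Caution: setx truncates values longer than 1024 characters.
        setx PATH "C:\Java\jdk1.6.0_26\bin;%PATH%"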

  • Returning a Value from a jQuery Dialog Button Instead of Running a Function

    - by kamera
    I'm working on an admin UI and I can't wrap my head around how to get a specific kind of modal behaviour. I'm using jQuery. I already have a system for modal dialogs in place, where an openModal(settings) function opens a modal. Based on the settings, it will show/hide or sometimes inject various dialogs. I also have a generic confirmation dialog, and this is the one I'm not sure how to make work. The layout is simple:

        <div id="confirmation-dialog">
            <h2>Are you sure?</h2>
            <button class="button.yes">Yes</button>
            <button class="button.no">No</button>
        </div>

    I could just hard-code each button to do what I need it to do, but I would like this to be more generic. Ideally, I would like it to function like this (I know this doesn't work):

        function deleteImportantThing() {
            if (confirmThis() == true) {
                // Delete the thing
            } else {
                // Do nothing, close dialog
            }
        }

        function confirmThis() {
            openModal({kind: "confirm"});
            // Check which button is clicked, then return true or false
        }

    I guess I just don't know enough JavaScript to get this to work, or to even figure out where to look. Any advice? I know there are modal dialog plug-ins out there, but I'd have to change how the rest of my modals work to integrate them, and I'd like to do this myself if at all possible.
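
    One shape that fits this, sketched on the assumption that a closeModal() counterpart exists next to openModal() (jQuery 1.7+ event syntax): a synchronous confirmThis() can't work, because the answer only exists after a click, so the caller's logic moves into a callback.

        // Hypothetical sketch: pass the follow-up work in as a callback.
        function confirmThis(onDecision) {
            openModal({kind: "confirm"});
            // Rebind on each opening; the event namespace keeps cleanup simple.
            $("#confirmation-dialog button").off("click.confirm").on("click.confirm", function () {
                var confirmed = $(this).hasClass("button.yes"); // matches the literal class name above
                closeModal();            // assumed counterpart to openModal()
                onDecision(confirmed);   // deliver the result asynchronously
            });
        }

        function deleteImportantThing() {
            confirmThis(function (confirmed) {
                if (confirmed) {
                    // Delete the thing
                }   // else: dialog is already closed, nothing to do
            });
        }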

  • Issue creating side-by-side slideshows with jQuery

    - by JShweky
    I'm trying to create a site with 2 slideshows. I've tweaked and re-tweaked the JS and jQuery numerous times. Sometimes one slideshow works perfectly and the other cycles between one picture; other times both work but are out of sync, or the fadeIn doesn't seem to be applied to the second slideshow, or in some variations one slideshow stays frozen on the initial image and just remains static. Anyway, I created a JSFiddle (link at bottom) and apparently my code is at least free of typos. The JS is below; the rest is on the JSFiddle. Any help would be greatly appreciated.

        $(document).ready(function () {
            $(".slider #1").fadeIn(1000);
            $(".slider #1").delay(2000).fadeOut(1000);
            var sc = $(".slider img").size();
            var count = 2;
            setInterval(function () {
                $(".slider #" + count).fadeIn(1000);
                $(".slider #" + count).delay(2000).fadeOut(1000);
                if (count === sc) {
                    count = 1;
                } else {
                    count++;
                }
            }, 3500);

            $(".sliderTwo #7").fadeIn(1000);
            $(".sliderTwo #7").delay(2000).fadeOut(1000);
            var sc2 = 12;
            var count2 = 7;
            setInterval(function () {
                $(".sliderTwo #" + count2).fadeIn(1000);
                $(".sliderTwo #" + count2).delay(2000).fadeOut(1000);
                if (count2 === sc2) {
                    count2 = 7;
                } else {
                    count2++;
                }
            }, 3500);
        });

    http://jsfiddle.net/gg4PL/
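
    A sketch of one way to collapse the duplication so both shows share the same logic; it keys off image order instead of ids, since ids starting with a digit (like #1) are invalid in HTML4 selectors and may themselves explain some of the flakiness:

        // Hypothetical helper: runs one slideshow per container, independent of ids.
        function startSlideshow($container, showMs, fadeMs) {
            var $slides = $container.find("img");
            var i = 0;
            $slides.hide().eq(0).fadeIn(fadeMs);
            setInterval(function () {
                $slides.eq(i).fadeOut(fadeMs);
                i = (i + 1) % $slides.length;              // wrap around
                $slides.eq(i).delay(fadeMs).fadeIn(fadeMs); // wait for the fade-out
            }, showMs + 2 * fadeMs);
        }

        $(document).ready(function () {
            startSlideshow($(".slider"), 2000, 1000);
            startSlideshow($(".sliderTwo"), 2000, 1000);
        });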

  • Sending a file to an API - C#

    - by alex
    I'm trying to use an API which sends a fax. I have a PHP example below (I will be using C#, however):

        <?php
        //This is example code to send a FAX from the command line using the Simwood API
        //It is illustrative only and should not be used without the addition of error checking etc.
        $ch = curl_init("http://url-to-api-endpoint");
        $fax_variables=array(
            'user'=> 'test',
            'password'=> 'test',
            'sendat' => '2050-01-01 01:00',
            'priority'=> 10,
            'output'=> 'json',
            'to[0]' => '44123456789',
            'to[1]' => '44123456780',
            'file[0]'=>'@/tmp/myfirstfile.pdf',
            'file[1]' => '@/tmp/mysecondfile.pdf'
        );
        print_r($fax_variables);
        curl_setopt ($ch, CURLOPT_POST, 1);
        curl_setopt ($ch, CURLOPT_POSTFIELDS, $fax_variables);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $result=curl_exec ($ch);
        $info = curl_getinfo($ch);
        $result['http_code'];
        curl_close ($ch);
        print_r($result);
        ?>

    My question is: in the C# world, how would I achieve the same result? Do I need to build a POST request? Ideally, I was trying to do this using REST, constructing a URL and using HttpWebRequest (GET) to call the API.
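
    Since the PHP example posts multipart/form-data (the @ prefix marks file parts), the C# equivalent has to be a POST too; a file body can't ride on a GET query string. A minimal sketch with HttpClient (.NET 4.5+, C# 7.1+ for async Main); the endpoint, field names, and paths mirror the example above and are untested assumptions:

        using System;
        using System.IO;
        using System.Net.Http;
        using System.Threading.Tasks;

        class FaxSender
        {
            static async Task Main()
            {
                using (var client = new HttpClient())
                using (var form = new MultipartFormDataContent())
                {
                    form.Add(new StringContent("test"), "user");
                    form.Add(new StringContent("test"), "password");
                    form.Add(new StringContent("json"), "output");
                    form.Add(new StringContent("44123456789"), "to[0]");

                    // The '@/tmp/file.pdf' entries in the PHP example become file parts here.
                    var pdf = new ByteArrayContent(File.ReadAllBytes(@"C:\temp\myfirstfile.pdf"));
                    form.Add(pdf, "file[0]", "myfirstfile.pdf");

                    var response = await client.PostAsync("http://url-to-api-endpoint", form);
                    Console.WriteLine((int)response.StatusCode);
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                }
            }
        }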

  • AutoMapper strings to enum descriptions

    - by 6footunder
    Given the requirement: take an object graph and set all enum-typed properties based on the processed value of a second string property. Convention dictates that the name of the source string property will be that of the enum property with a postfix of "Raw". By "processed" we mean we'll need to strip specified characters etc. I've looked at custom formatters, value resolvers and type converters, none of which seems like a solution for this. We want to use AutoMapper as opposed to our own reflection routine for two reasons: a) it's used extensively throughout the rest of the project and b) it gives you recursive traversal ootb.

    Example: given the (simple) structure below, and this:

        var tmp = new SimpleClass1
        {
            CountryRaw = "United States",
            Person = new Person { GenderRaw = "Male" }
        };
        var tmp2 = new SimpleClass1();
        Mapper.Map(tmp, tmp2);

    we'd expect tmp2's MappedCountry enum to be Country.UnitedStates and the Person property to have a gender of Gender.Male.

        public class SimpleClass1
        {
            public string CountryRaw { get; set; }
            public Country MappedCountry { get; set; }
            public Person Person { get; set; }
        }

        public class Person
        {
            public string GenderRaw { get; set; }
            public Gender Gender { get; set; }
            public string Surname { get; set; }
        }

        public enum Country { UnitedStates = 1, NewZealand = 2 }

        public enum Gender { Male, Female, Unknown }

    Thanks
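
    For the single-property case, something along these lines works with AutoMapper's classic static API; the graph-wide, convention-driven version would wrap the same parsing in a custom resolver or a reflection loop that emits one ForMember per Raw/enum pair. The helper below is illustrative:

        // Illustrative only: strips the characters the convention says to ignore,
        // then parses the result into the target enum.
        static TEnum ParseRaw<TEnum>(string raw) where TEnum : struct
        {
            var cleaned = (raw ?? "").Replace(" ", "");   // "United States" -> "UnitedStates"
            TEnum value;
            return Enum.TryParse(cleaned, true, out value) ? value : default(TEnum);
        }

        Mapper.CreateMap<SimpleClass1, SimpleClass1>()
              .ForMember(d => d.MappedCountry, o => o.MapFrom(s => ParseRaw<Country>(s.CountryRaw)));
        Mapper.CreateMap<Person, Person>()
              .ForMember(d => d.Gender, o => o.MapFrom(s => ParseRaw<Gender>(s.GenderRaw)));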

  • How to determine which element(s) are visible in an overflowed <div>

    - by jjross
    Basically, I'm trying to implement a system that behaves similarly to the reading pane built into the Google Reader interface. If you haven't seen it, Google Reader presents each article in a separate box and as you scroll it highlights the current box (and marks the article as read). In addition to this, you can move forward or backward in the article list by clicking the previous and next buttons in the UI. I've basically figured out how to do most of the functionality. However, I'm not sure how I can determine which of my divs is currently visible in the scrollable pane. I have a div that is set to overflow:auto. Inside of this div are other divs, each one containing a piece of content. I've used the following jQuery plugin to make everything scroll based on a click of the "next" or "previous" button and it works like a charm: http://demos.flesler.com/jquery/serialScroll/ But I can't tell which div has "focus" in the scrollable pane. I'd like to be able to do this for two reasons:

    1. I'd like to highlight the item that the user is currently reading (similar to Google Reader). I need to do this regardless of whether they used the plugin to get there or used the browser's scroll bar.
    2. I need to be able to tell the plugin which item has focus, so that my call to scroll to the "next" pane actually uses the currently viewed pane (and not just the previous pane that the plugin scrolled from).

    I've tried doing some searching but I can't seem to figure out a way to do this. I found lots of ways to scroll to a particular item, but I can't find a way to determine which element is visible in an overflowed div. If I can determine which items are visible, I can (probably) figure out the rest. I'm using jQuery if that helps. Thanks!
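
    A sketch of the geometry test: an element is in view when its offset range overlaps the pane's scroll window. #pane and .item are hypothetical names, and the pane needs position:relative so offsetTop is measured against it.

        // Returns the items at least partially visible inside the scrollable pane.
        function visibleItems($pane) {
            var top = $pane.scrollTop();
            var bottom = top + $pane.innerHeight();
            return $pane.children(".item").filter(function () {
                var elTop = this.offsetTop;               // relative to the positioned pane
                var elBottom = elTop + this.offsetHeight;
                return elBottom > top && elTop < bottom;  // the two ranges overlap
            });
        }

        // e.g. re-highlight on every scroll, however it was triggered:
        $("#pane").on("scroll", function () {
            $(".item").removeClass("focused");
            visibleItems($(this)).first().addClass("focused");
        });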

  • Multiple touches: touchend event fired only when a touchmove occurs

    - by dridri
    I would like to add some multitouch features to my JavaScript application when it is accessed from an iOS device (and maybe Android later). I want to provide a shiftkey-like functionality: the user may hold a button on the screen with one finger, and while this button is pressed, the behavior for a tap action on the rest of the screen is slightly different from the classic tap. The problem I'm running into is that I do not receive any touchend event for the tapping finger unless a touchmove is fired for the first finger holding the shiftkey button. Because the screen is very sensitive, touchmove events get easily fired and in most cases everything works fine. But when the user's finger is a bit too still, the tap is not detected until the user moves his finger a bit. This induces a variable 'delay' between the tap and the action that occurs on the screen (the delay may vary and last a few seconds if the user is very calm). My guess is that this delay will cause the user to tap again and thus fire the action a second time, which is something that I don't want! You can test it here with your iPad/iPhone: http://jsfiddle.net/jdeXH/8/ Try to make the body remain green for a few seconds by holding your finger very still on the cyan div while tapping on the red div. Is this behavior to be expected? Is there some known workaround for the problem? I would have expected the touchend event to be fired right away when the finger is removed from the screen. I tested this with iOS 5.1.1 (iPad 1 and iPhone 4S). Edit: found a similar question: Multitouch touchEvents not triggered as they should on Safari Mobile
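
    The workaround most often suggested for Mobile Safari deferring events is to call preventDefault() in touchstart on the held button, opting it out of scroll/zoom gesture recognition; whether that defeats this particular iOS 5.1.1 quirk needs testing on the device. A sketch, with hypothetical ids standing in for the cyan and red divs:

        var shiftHeld = false;
        var shiftBtn = document.getElementById("shift");

        shiftBtn.addEventListener("touchstart", function (e) {
            e.preventDefault();   // opt this finger out of gesture recognition
            shiftHeld = true;
        }, false);
        shiftBtn.addEventListener("touchend", function () {
            shiftHeld = false;
        }, false);

        document.getElementById("target").addEventListener("touchend", function (e) {
            // changedTouches identifies the lifted finger(s) for this event
            console.log(shiftHeld ? "shift-tap" : "plain tap", e.changedTouches.length);
        }, false);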

  • AJAX panel update during the middle of a function (C# ASP.NET site)

    - by user2615302
    This is the button click. I would like to update LbError.Text to "" before the rest of the function continues. This is my current code:

        protected void BUpload_Click(object sender, EventArgs e)
        {
            LbError.Text = "";
            UpdatePanel1.Update();
            // need it to update here before it moves on,
            // but it waits till the end to update the label
            // Execute functions...
            // ...
        }

        <asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
            <ContentTemplate>
                <asp:Label ID="LbError" runat="server" CssClass="failureNotification" Text=""></asp:Label>
                <br /><br />
                <br /><br />
                <asp:TextBox ID="NewData" runat="server"></asp:TextBox><br />
                then click
                <asp:Button ID="BUpload" runat="server" Text="Upload New Data" onclick="BUpload_Click"/><br />
            </ContentTemplate>
        </asp:UpdatePanel>

    Things I have tried include using another UpdatePanel, with the same results. Any help would be greatly appreciated.
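
    For context, a single postback produces a single response: Update() only marks the panel dirty, and nothing reaches the browser until the handler returns, which is why the label appears to wait. The usual workaround is two postbacks; a sketch, where BHidden is a hypothetical invisible button (e.g. style="display:none") inside the same form:

        protected void BUpload_Click(object sender, EventArgs e)
        {
            LbError.Text = "";   // this renders as soon as the handler returns
            // Queue a second, automatic postback that does the slow work:
            ScriptManager.RegisterStartupScript(this, GetType(), "step2",
                "__doPostBack('" + BHidden.UniqueID + "', '');", true);
        }

        protected void BHidden_Click(object sender, EventArgs e)
        {
            // Long-running work happens here, after the cleared label
            // has already been sent to the browser.
        }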

  • Cannot get list of elements using Linq to XML

    - by Blackator
    Sample XML:

        <CONFIGURATION>
          <Files>
            <File>D:\Test\TestFolder\TestFolder1\TestFile.txt</File>
            <File>D:\Test\TestFolder\TestFolder1\TestFile01.txt</File>
            <File>D:\Test\TestFolder\TestFolder1\TestFile02.txt</File>
            <File>D:\Test\TestFolder\TestFolder1\TestFile03.txt</File>
            <File>D:\Test\TestFolder\TestFolder1\TestFile04.txt</File>
          </Files>
          <SizeMB>3</SizeMB>
          <BackupLocation>D:\Log backups\File backups</BackupLocation>
        </CONFIGURATION>

    I've been doing some tutorials but I am unable to get the full list of File elements inside the Files element. It only shows the first element and doesn't display the rest. This is my code:

        var fileFolders = from file in XDocument.Load(@"D:\Hello\backupconfig1.xml").Descendants("Files")
                          select new { File = file.Element("File").Value };

        foreach (var fileFolder in fileFolders)
        {
            Console.WriteLine("File = " + fileFolder.File);
        }

    How do I display all the File elements in the Files element, plus the SizeMB and BackupLocation? Thanks
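
    For what it's worth, the query enumerates <Files> containers (there is exactly one), and Element("File") then returns only that container's first child, hence the single result. Iterating the children directly gives all five; a sketch:

        var doc = XDocument.Load(@"D:\Hello\backupconfig1.xml");

        // One <File> per result: enumerate the children, not the container.
        foreach (var file in doc.Descendants("Files").Elements("File"))
        {
            Console.WriteLine("File = " + file.Value);
        }

        // The scalar settings sit directly under <CONFIGURATION>:
        Console.WriteLine("SizeMB = " + doc.Root.Element("SizeMB").Value);
        Console.WriteLine("BackupLocation = " + doc.Root.Element("BackupLocation").Value);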

  • CSS challenge: Two background images, centered column with fixed width, min-height 100%

    - by laurent
    In a nutshell, I need a CSS solution for the following requirements:

    - One centered column with fixed width and a minimum height of 100%
    - Two vertically repeated background images behind the centered column, one aligned to the left, one aligned to the right
    - Cross-browser compatibility

    A little more detail: today a new requirement for my current web site project came up: a background image with gradients on the left and right side. The challenge is now to specify two different background images while keeping the rest of the layout spec. Unfortunately the (simple) layout somehow doesn't go with the two backgrounds. My layout is basically one centered column with fixed width:

        #main_container {
            margin: 0 auto;
            min-height: 100%;
            width: 800px;
        }

    Furthermore it's necessary to stretch the column to a minimum height of 100%, since there are quite some pages with only little content. The following CSS styles take care of that:

        html {
            height: 100%;
        }
        body {
            margin: 0;
            height: 100%;
            padding: 0;
        }

    So far so good - until the two-background-image issue arrived... I tried the following solutions:

    - Two absolutely positioned divs behind the main container
    - One image defined on the body, one on the html element
    - One image defined on the body, the other one on a large div behind the main container

    With each of them, the dynamic height solution was ruined. Either the main container didn't stretch to 100% when it was too small, or the background remained at 100% when the content was actually longer.
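
    A sketch of the variant of the body/html attempt that usually survives the min-height requirement, with hypothetical image names: the likely culprit before was body { height:100% }, which pins the body's background to exactly one viewport; min-height lets it follow long content, while the html background tiles over the whole canvas regardless. The column itself can then stay transparent and needs no min-height of its own.

        /* Sketch, assuming gradient-left.png / gradient-right.png placeholders. */
        html {
            height: 100%;                                    /* anchor for min-height */
            background: url(gradient-left.png) repeat-y left top;
        }
        body {
            margin: 0;
            padding: 0;
            min-height: 100%;                                /* grows past 100% with content */
            background: url(gradient-right.png) repeat-y right top;
        }
        #main_container {
            margin: 0 auto;
            width: 800px;
        }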

  • RDS installation failure on 2012 R2 Server Core VM in Hyper-V Server

    - by Giles
    I'm currently installing a test-bed for my firm's infrastructure replacement. 10 or so Windows/Linux servers will be replaced by 2 physical servers running Hyper-V Server. All services (DC, RDS, SQL) will be on Windows 2012 R2 Server Core VMs, Exchange on Server 2012 R2 GUI, and the rest are things like Elastix, MailArchiver etc, which aren't part of the equation thus far. I have installed Hyper-V Server on a test box, and successfully got two virtual DCs running, SQL 2014 running, and 8.1 which I use for the RSAT tools. When trying to install RDS (the old-fashioned kind, not the newer VDI(?) style), I get a failed installation due to the server not being able to reboot. A couple of articles have said not to do it locally, so I've moved on. Sitting at the PowerShell prompt on the Domain Controller or SQL server (both Server Core), I run the following commands:

        Import-Module RemoteDesktop
        New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost "AlstersTS.Alsters.local"

    The installation begins, carries on for 2 or 3 minutes, then I receive the following error message:

        New-SessionDeployment : Validation failed for the "RD Connection Broker" parameter.
        AlstersTS.Alsters.local Unable to connect to the server by using Windows PowerShell remoting.
        Verify that you can connect to the server.
        At line:1 char:1
        + New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost " ...
        +
            + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
            + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-SessionDeployment

    So far, I have:

    - Triple, triple checked syntax.
    - Tried various other commands, and a script to accomplish the same task.
    - Checked DNS is functioning as it should.
    - Checked to the best of my knowledge that AD is working as it should.
    - Checked that the Network Service has the needed permissions.
    - Created another VM and placed the two roles on different servers.
    - Deleted all VMs and started again with a new domain name (lather, rinse, repeat).
    - Performed the whole installation on a second physical box running Hyper-V Server.
    - Pleaded with it.

    Interestingly, if I perform the installation via a GUI installation, the thing just works! Now I know I could convert this to a Server Core role after installation, but this wouldn't teach me what was wrong in the first instance. I've probably gone 10 pages deep through various Google searches, each page getting a little less relevant. The closest matches seem to have good information, but it doesn't seem to be the fix for my set-up. As a side note, I expected to be able to "tee" or "out-file" the error message into a text file, but couldn't get that to work either, so I've typed in the error message manually. Chaps, any suggestions, from the glaringly obvious to the long-winded and complex? Thanks!
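
    Since the failure is specifically a WinRM remoting check, a sketch of the usual verification steps from the same Server Core prompt (standard cmdlets; the host name matches the example above):

        # Is WinRM listening and answering, locally and by FQDN?
        Test-WSMan
        Test-WSMan -ComputerName AlstersTS.Alsters.local

        # Re-initialise the listener and firewall exceptions if not:
        Enable-PSRemoting -Force

        # Prove a full remote session works under the installing account:
        Invoke-Command -ComputerName AlstersTS.Alsters.local { hostname }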

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a few readings about JVM memory (here, here, here, others I forgot...), I am expecting the resident set size of my Java process to be roughly equal to the current heap space capacity. That's not what the numbers are saying; it seems to be roughly equal to the max heap space capacity.

    Resident set size:

        # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
        11507912
        # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};'
        Number of processes = 1
        Memory usage per process = 11237.8 MB
        Total memory usage = 11237.8 MB

    Java heap:

        # jmap -heap 1
        Attaching to process ID 1, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 24.55-b03

        using thread-local object allocation.
        Garbage-First (G1) GC with 18 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 10
           MaxHeapFreeRatio = 20
           MaxHeapSize      = 10737418240 (10240.0MB)
           NewSize          = 1363144 (1.2999954223632812MB)
           MaxNewSize       = 17592186044415 MB
           OldSize          = 5452592 (5.1999969482421875MB)
           NewRatio         = 2
           SurvivorRatio    = 8
           PermSize         = 20971520 (20.0MB)
           MaxPermSize      = 85983232 (82.0MB)
           G1HeapRegionSize = 2097152 (2.0MB)

        Heap Usage:
        G1 Heap:
           regions  = 2560
           capacity = 5368709120 (5120.0MB)
           used     = 1672045416 (1594.586769104004MB)
           free     = 3696663704 (3525.413230895996MB)
           31.144272834062576% used
        G1 Young Generation:
        Eden Space:
           regions  = 627
           capacity = 3279945728 (3128.0MB)
           used     = 1314914304 (1254.0MB)
           free     = 1965031424 (1874.0MB)
           40.089514066496164% used
        Survivor Space:
           regions  = 49
           capacity = 102760448 (98.0MB)
           used     = 102760448 (98.0MB)
           free     = 0 (0.0MB)
           100.0% used
        G1 Old Generation:
           regions  = 147
           capacity = 1986002944 (1894.0MB)
           used     = 252273512 (240.5867691040039MB)
           free     = 1733729432 (1653.413230895996MB)
           12.702574926293766% used
        Perm Generation:
           capacity = 39845888 (38.0MB)
           used     = 38884120 (37.082786560058594MB)
           free     = 961768 (0.9172134399414062MB)
           97.58628042120682% used

        14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap space to be able to grow during spikes (to avoid very slow full GCs), but I would like to have the resident set size as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that?

    Environment: Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running Elasticsearch 1.2.0:

        /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k
        -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350
        -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops
        -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
        -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution
        -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime
        -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError
        -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof
        -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log
        -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch
        -Des.foreground=yes -Des.path.home=/opt/elasticsearch
        -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/*
        org.elasticsearch.bootstrap.Elasticsearch

    There actually are 5 Elasticsearch nodes, each in a different Docker container. All have about the same memory usage. Some stats about the index: size 9.71Gi (19.4Gi), docs 3,925,398 (4,052,694).
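
    One way to watch what the OS-visible number actually tracks, sketched with stock JDK/Linux tools: committed (not used) heap becomes resident once the pages are touched, and G1 on Java 7 rarely un-commits, so RSS tends to ratchet up toward -Xmx after the first spike.

        # Committed vs. used generation sizes, sampled every 5s for the PID-1 JVM (KB):
        jstat -gccapacity 1 5000

        # Which mappings are actually resident; the heap shows up as one large anon region:
        pmap -x 1 | sort -nk3 | tail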

  • Windows Firewall broken on Server 2008

    - by Chloraphil
    This evening I tried to RDP into my Server 2008 box and was unable to. After poking around some, I discovered that something is awry with my Windows Firewall. I did install 5 Windows updates remotely earlier today, but rolled those back in an attempt to see if that fixed the problem, and had no luck. Symptoms:

    - cannot RDP to the machine (including from itself)
    - cannot ping the machine
    - cannot connect to a file share on the machine
    - error message when attempting to open the "Windows Firewall with Advanced Security" snap-in: "There was an error opening the Windows Firewall with Advanced Security snap-in ... The Windows Firewall with Advanced Security snap-in failed to load. Restart the Windows Firewall service on the computer that you are managing. Error code: 0x6D9."
    - when I opened the "user-friendly" Windows Firewall it failed to load most of the GUI elements, meaning the title bar with close, minimize, and maximize buttons is present, but the rest of the window has a white background with a yellow rounded rectangle and a yellow triangle with an exclamation point in the upper right (hope that made sense)
    - "Windows Firewall" does not appear in the list of services

    I ran a virus scan that found nothing. How do I fix the firewall and hopefully restore the ability to RDP?

    EDIT: Added at fission's request:

        c:\>sc query mpsdrv

        SERVICE_NAME: mpsdrv
                TYPE               : 1  KERNEL_DRIVER
                STATE              : 4  RUNNING
                                        (STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)
                WIN32_EXIT_CODE    : 0  (0x0)
                SERVICE_EXIT_CODE  : 0  (0x0)
                CHECKPOINT         : 0x0
                WAIT_HINT          : 0x0

        c:\>sc query mpssvc

        SERVICE_NAME: mpssvc
                TYPE               : 20  WIN32_SHARE_PROCESS
                STATE              : 1  STOPPED
                WIN32_EXIT_CODE    : 1068  (0x42c)
                SERVICE_EXIT_CODE  : 0  (0x0)
                CHECKPOINT         : 0x0
                WAIT_HINT          : 0x0

    Those two registry keys do exist: HKLM\SYSTEM\CurrentControlSet\Services\mpsdrv & HKLM\SYSTEM\CurrentControlSet\Services\MpsSvc! The problem seems to be with the Base Filtering Engine; when I try to start it I get the following error: "Windows could not start the Base Filtering Engine service on MYCOMPUTER. Error 15100: The resource loader failed to find MUI file."

    EDIT 2: I ran sfc /scannow and found about 100 occurrences of "[SR] Cannot repair member file"... including several related to the firewall (ex: [l:32{16}]"Firewall.cpl.mui" of Networking-MPSSVC.Resources...). One of them mentioned wordpad.exe, which I tried to open, and it failed. I found here mentions of mounting the install.wim on the install media to copy the affected files over. I am downloading the appropriate AIK and will continue tomorrow evening.
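
    For the planned install.wim step, a hedged sketch with imagex, which ships in the WAIK being downloaded (paths are hypothetical):

        rem Mount the first image inside install.wim to browse for clean copies:
        imagex /mount D:\sources\install.wim 1 C:\wim

        rem ...copy the intact .mui files over the corrupted ones, then:
        imagex /unmount C:\wim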

  • 500 internal server error on certain page after a few hours

    - by Brian Leach
    I am getting a 500 Internal Server Error on a certain page of my site after a few hours of being up. I restart the uWSGI instance with

        uwsgi --ini /home/metheuser/webapps/ers_portal/ers_portal_uwsgi.ini

    and it works again for a few hours. The rest of the site seems to be working. When I navigate to my_table, I am directed to the login page. But I get the 500 error on my table page on login. I followed the instructions here to set up my nginx and uWSGI configs. That is, I have ers_portal_nginx.conf located in my app folder that is symlinked to /etc/nginx/conf.d/. I start my uWSGI "instance" (not sure what exactly to call it) in a Screen instance as mentioned above, with the .ini file located in my app folder.

    My ers_portal_nginx.conf:

        server {
            listen 80;
            server_name www.mydomain.com;
            location / { try_files $uri @app; }
            location @app {
                include uwsgi_params;
                uwsgi_pass unix:/home/metheuser/webapps/ers_portal/run_web_uwsgi.sock;
            }
        }

    My ers_portal_uwsgi.ini:

        [uwsgi]
        #user info
        uid = metheuser
        gid = ers_group

        #application's base folder
        base = /home/metheuser/webapps/ers_portal

        #python module to import
        app = run_web
        module = %(app)

        home = %(base)/ers_portal_venv
        pythonpath = %(base)

        #socket file's location
        socket = /home/metheuser/webapps/ers_portal/%n.sock

        #permissions for the socket file
        chmod-socket = 666

        #uwsgi varible only, does not relate to your flask application
        callable = app

        #location of log files
        logto = /home/metheuser/webapps/ers_portal/logs/%n.log

    Relevant parts of my views.py:

        data_modification_time = None
        data = None

        def reload_data():
            global data_modification_time, data, sites, column_names
            filename = '/home/metheuser/webapps/ers_portal/app/static/' + ec.dd_filename
            mtime = os.stat(filename).st_mtime
            if data_modification_time != mtime:
                data_modification_time = mtime
                with open(filename) as f:
                    data = pickle.load(f)
            return data

        # a bunch of authentication stuff...

        @app.route('/')
        @app.route('/index')
        def index():
            return render_template("index.html", title = 'Main',)

        @app.route('/login', methods = ['GET', 'POST'])
        def login():
            # login stuff...

        @app.route('/my_table')
        @login_required
        def my_table():
            print 'trying to access data table...'
            data = reload_data()
            return render_template("my_table.html", title = "Rundata Viewer",
                                   sts = sites, cn = column_names,
                                   data = data)  # dictionary of data

    I installed nginx via yum as described here (yesterday). I am using uWSGI installed in my venv via pip. I am on CentOS 6. My uWSGI log shows:

        Wed Jun 11 17:20:01 2014 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET /whm-server-status (127.0.0.1)
        IOError: write error
        [pid: 9586|app: 0|req: 135/135] 127.0.0.1 () {24 vars in 292 bytes} [Wed Jun 11 17:20:01 2014] GET /whm-server-status => generated 0 bytes in 3 msecs (HTTP/1.0 404) 2 headers in 0 bytes (0 switches on core 0)

    When it's working, the print statement in the "my_table" route prints into the log file. But not once it stops working. Any ideas?
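
    A sketch of one way to make the failure visible instead of a bare 500, assuming Flask (which the decorators suggest): catch and log inside the route so the traceback lands in the uWSGI log.

        import logging, traceback

        logging.basicConfig(level=logging.INFO)  # stderr ends up in the uWSGI logto file

        @app.route('/my_table')
        @login_required
        def my_table():
            try:
                data = reload_data()
                return render_template("my_table.html", title="Rundata Viewer",
                                       sts=sites, cn=column_names, data=data)
            except Exception:
                logging.error("my_table failed:\n%s", traceback.format_exc())
                raise  # still a 500, but now with a recorded traceback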

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it is run. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit that is being pushed to the central repository. For instance, if 'git rm' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Removed: command not found", and if 'git add' was committed and pushed to the central repo the error message will be "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script:

        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull request on the remote server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:

        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
         1 files changed, 0 insertions(+), 5 deletions(-)
         delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master     -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found   # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been successfully run, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
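
    For what it's worth, the backticks on line 16 look like the culprit: backticks substitute the command's output back into the script, so bash captures what the remote git pull prints ("Removed ...", "Merge ...") and tries to execute its first word as a command, which matches the error changing with the kind of commit. A sketch of the fix:

        # Run ssh directly; don't capture its output with backticks and
        # re-execute it as a command.
        ssh "$SSH_USER@$REMOTE_HOST"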

  • Allow anonymous upload for Vsftpd?

    - by user15318
    I need a basic FTP server on Linux (CentOS 5.5) without any security measures, since the server and the clients are located on a test LAN, not connected to the rest of the network, which itself uses non-routable IPs behind a NAT firewall with no incoming access to FTP. Some people recommend vsftpd over PureFTPd or ProFTPd. No matter what I try, I can't get it to allow an anonymous user (i.e. logging in as "ftp" or "anonymous" and typing any string as the password) to upload a file:

        # yum install vsftpd
        # mkdir /var/ftp/pub/upload
        # cat vsftpd.conf
        listen=YES
        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        #directory was created by root, hence owned by root.root
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #chroot_local_user=NO
        #chroot_list_enable=YES
        #chroot_list_file=/etc/vsftpd.chroot_list
        chown_uploads=YES

    When I log on from a client, here's what I get:

        500 OOPS: cannot change directory:/var/ftp/pub/incoming

    I also tried "# chmod 777 /var/ftp/incoming/", but get the same error. Does someone know how to configure vsftpd with minimum security? Thank you.

    Edit: SELinux is disabled and here are the file permissions:

        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted
        SETLOCALDEFS=0
        # sestatus
        SELinux status: disabled
        # getenforce
        Disabled
        # grep ftp /etc/passwd
        ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
        # ll /var/
        drwxr-xr-x 4 root root 4096 Mar 14 10:53 ftp
        # ll /var/ftp/
        drwxrwxrwx 2 ftp ftp 4096 Mar 14 10:53 incoming
        drwxr-xr-x 3 ftp ftp 4096 Mar 14 11:29 pub

    Edit: latest vsftpd.conf:

        listen=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        anonymous_enable=YES
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #500 OOPS: bad bool value in config file for: chown_uploads
        chown_uploads=YES
        chown_username=ftp

    Edit: with the trailing space removed from "chown_uploads", the bad-bool 500 is solved, but anonymous still doesn't work:

        client> ./ftp server
        Connected to server.
        220 (vsFTPd 2.0.5)
        Name (server:root): ftp
        331 Please specify the password.
        Password:
        500 OOPS: cannot change directory:/var/ftp/pub/incoming
        Login failed.
        ftp> bye

    With user "ftp" listed in /etc/passwd with home directory set to "/var/ftp", access rights on /var/ftp set to "drwxr-xr-x", and /var/ftp/incoming set to "drwxrwxrwx"... could it be due to PAM maybe? I don't find any FTP log file in /var/log to investigate.

    Edit: Here's a working configuration to let ftp/anonymous connect and upload files to /var/ftp:

        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
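
    One detail worth noting: the listings show /var/ftp/incoming and /var/ftp/pub as siblings, while anon_root points at /var/ftp/pub/incoming, a path that never appears in the output; a missing chdir target alone would produce that 500 OOPS. A layout sketch consistent with the final working config (vsftpd also dislikes an anon root that is itself writable):

        # anon root (the ftp user's home) must exist and not be world-writable:
        chown root:root /var/ftp
        chmod 755 /var/ftp

        # give anonymous users a writable subdirectory instead:
        mkdir -p /var/ftp/incoming
        chown ftp:ftp /var/ftp/incoming
        chmod 777 /var/ftp/incoming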

  • Uploading via HTTP POST (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was OK (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
            (
                [name] => MOV023.3gp
                [type] => video/3gpp
                [tmp_name] => /tmp/phpgOdvYQ
                [error] => 0
                [size] => 0
            )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the PHP directives upload_max_filesize to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP execution? Some limit in size or in upload time? I've read about a default 300-second timeout limit, but that should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.
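
    Two Apache-side limits worth ruling out, sketched as config directives with illustrative values; in particular mod_security, where installed, caps request bodies at roughly 12.5MB by default, which fits the big-files-vanish-silently pattern:

        # httpd.conf / vhost: 0 means unlimited; check whether a lower value is set anywhere
        LimitRequestBody 0

        # mod_security (only if the module is loaded), e.g. in modsecurity.conf:
        SecRequestBodyLimit 209715200
        SecRequestBodyInMemoryLimit 131072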

  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error:

        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en  Unable to connect to 192.168.1.70:8118:

    Now this was working just fine until I changed the IP this morning. I have the server set to a static IP of 10.0.1.70; for years it has been 192.168.1.70, the IP apt-get is trying to use right now. I use Privoxy and Tor, thus the 8118 port. Like I said, it all worked until I changed the static IP from 192.168.1.70 to 10.0.1.70. I was forced to do so because of router issues. (Long and involved story; I didn't really want to change the IP because I knew something like this would happen.) The setup for Tor/Privoxy requires that you point Privoxy at Tor via 127.0.0.1:9050, then point curl, etc. at Privoxy via $HOME/.bashrc. Typically you would set the listen IP for Privoxy to 127.0.0.1, but if you want it accessible to the rest of the LAN you set it to the server's LAN IP. Which I did a long time ago, and it was working fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc. What makes this really strange for me is that curl is working fine. I curl icanhazip.com and voila, I get a new IP every 10 minutes or so. I curl CNN.com and I get the short but sweet "permanently moved to www.cnn.com" message I expect. Firefox works fine. Ping works fine. And I've tested all of this via Remote Desktop over my LAN. So the connection appears to be fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt. So the connection to the internet and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/Tor just fine. The real irony here is that I've tried to open up Privoxy to go to Ubuntu's servers directly without going through Tor to speed up the downloads from Ubuntu (did this months ago). So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70. And 192.168.1.70 is no longer valid. Thanks for the help.
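
    apt never reads $HOME/.bashrc; it keeps its own proxy setting, which is the usual hiding place for a stale address. A sketch of where to look (the Acquire syntax is standard; file names under apt.conf.d vary):

        # find the stale entry:
        grep -r "192.168.1.70" /etc/apt/
        apt-config dump | grep -i proxy   # what apt actually resolves

        # typical entry to correct, in /etc/apt/apt.conf or /etc/apt/apt.conf.d/*:
        #   Acquire::http::Proxy "http://10.0.1.70:8118/";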

  • OpenVPN: great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.

    Back story: my boss travels a lot and wants to be able to access all his files from the road, and he is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure are strange to get around this limitation.

    Specs: It's OpenVPN 2, and there are six machines using it (only three actually use it; the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the Wake-on-LAN packet from the office XP, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private-IP crap means I can't directly access the office machines from outside the LAN, so everyone connects to my house first, then they talk to each other from there. The Wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss. The VPN uses client config files to give each client a static IP. The only thing I found in Google was some changes to the VPN MTU settings (down to 1400), but no real help. Oh, and I forgot... all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode.

    Server config:

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status
        log

    Client configs (all are simple variations on this):

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3
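
    One tweak commonly reported for exactly this asymmetry (Windows fine, Macs slow) is the UDP socket buffer size, which OpenVPN sets small by default and which hurts some platforms more than others. A hedged sketch to try in both server and client configs, alongside the existing fragment/mssfix lines:

        # let the OS pick its own UDP buffer sizes instead of OpenVPN's defaults
        sndbuf 0
        rcvbuf 0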

  • Profiles and using the local profile for a domain user

    - by Harry
    I'm having some trouble with profiles and would like to reach out for some help. I've tried to do some research to help myself along, but I'm not making much progress on my own. I've pretty much taken over the sysadmin duties for my small lab; I don't have much experience to justify it besides being the only one with the time and dedication to go at it (the environment was in a state of disrepair). My network and domain are extremely small by most standards, about 10 users at a time. They generate pretty intensive activity on the network, and we do work with fairly large files. None of the network is online, which is nice at the moment because it is one less headache.

    On to my profile problem. I have set up roaming profiles for the users in the network. After a little research, I think I will be switching this to a hybrid of folder redirection and roaming profiles, as this seems to be best practice. I also don't want the users having to wait a long time if they have a bloated profile. I've finally got a build working using MDT. We have Mac Pros, and it wasn't fun getting everything to play nice. The way I did this was by setting up a reference computer, installing all the software and tools that each user would need, and editing the settings and preferences to how we would need them. I then used MDT to do a sysprep and capture to create the image of my reference computer. Using the reference image I can push out my images to the rest of the desktops in my environment.

    The issue I'm having is when we join the computer to the domain. The user can log in and operate fine on the computer, but I'd like more. When the user is logged on with their domain user name, they lose a lot of the icons I had on my reference image, as well as the desktop background and some other miscellaneous settings. I would love to have the user log on using their domain user name and see the icons and desktop environment as I had them set up on the reference computer. I'm not sure if it is possible, or something simple that I'm missing, but any help would be greatly appreciated!
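
    The standard mechanism for this is sysprep's CopyProfile setting, which copies the reference account's customizations into the machine's Default profile, so every new user, domain accounts included, inherits them. A sketch of the relevant unattend.xml fragment (component attributes abbreviated; adjust for your architecture):

        <settings pass="specialize">
          <component name="Microsoft-Windows-Shell-Setup"
                     processorArchitecture="amd64"
                     publicKeyToken="31bf3856ad364e35"
                     language="neutral" versionScope="nonSxS">
            <CopyProfile>true</CopyProfile>
          </component>
        </settings>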

  • Port forwarding problem

    - by Steve
    I have a modem connecting to the ADSL2 network and a router connecting to the modem. The rest of the machines all connect to the router. The modem has the IP 192.168.1.1 and the router's IP is 192.168.0.1. From the modem configuration, I can see that the modem thinks the router's IP is 192.168.1.2. I can visit the router by using either 192.168.0.1 or 192.168.1.2. Now I forward a port from the router to a private machine. It works: I can test it by typing 192.168.1.2 and I am redirected to the private machine. But if I use 192.168.0.1, it is still the router's configuration page. I also do a port forwarding on my modem. Since the modem sees only the router, I can only forward the port to the router's specific port. And I am thinking that by doing this, I can reach the private machine after two port forwardings, once on the modem and once on the router. I also have a static public IP. I want to achieve the goal that when someone types the public IP, he will be redirected to the private machine. But when I use some online port forwarding tester, the result always says that the port is closed on the public IP. I have these questions:

    1. Why does my router have two IPs?
    2. Why does the port forwarding work when visiting from one IP but not the other? I think port forwarding only applies when visiting from outside, rather than from both outside and inside. Otherwise, if I set port forwarding on my router/modem on port 80, I would never be able to see its configuration page again; everything would be forwarded. Am I right?
    3. How can I achieve my goal described above? By achieving this, I will have a dedicated server of my own and users can visit from the public IP.

    Can anyone correct me on any mistakes I made? I am using a Netconn modem and a D-Link DIR-300 router. Thank you very much for any help.

    Edit: Suppose I have correctly set up the whole thing. Now I want to test my website by using the public IP to visit it, but the port forwarding doesn't work. Does it consider that I am inside the local network and not using the port forwarding? If so, how can I test it? I ask my friends (outside my local network) to have a try and they can see the website. What should I do so that, from the inside, I can do the testing?
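
    On the edit: that is the classic NAT loopback (hairpinning) limitation; many consumer routers will not route a LAN packet addressed to their own public IP back inside, and whether the DIR-300 supports it is worth checking. A common testing workaround is to point the site's name at the LAN address on inside machines only, e.g. in the hosts file (the IP and name below are placeholders):

        # C:\Windows\System32\drivers\etc\hosts (or /etc/hosts)
        192.168.0.100   www.example.com   # LAN address of the private machine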

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this:

    - Different data sets (filesystems or subtrees) can have different RAID levels, so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a small number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0).
    - If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be)
    - Avoid out-of-kernel modules
    - R/W or read-only COW snapshots. If it's block-level snapshots, the filesystem should be synced and quiesced during a snapshot.
    - Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution)

    Not required but nice-to-haves:

    - Transparent compression, per-file or subtree. Even better if, like NetApp, it analyzes the data first for compressibility and only compresses compressible data
    - Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good)
    - Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks)
    - Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level.
    - Space-efficient packing of small files

    I tried ZFS on Linux, but the limitations were:

    - Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager
    - Write IOPS does not scale with the number of devices in a raidz vdev
    - Cannot add disks to raidz vdevs
    - Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks

    ext4 on LVM2 looks like an option, except I can't tell whether I can shrink, extend, and redistribute RAID-type logical volumes onto new spindles (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves, so I was wondering if there is something better out there. I did look at LVM dangers and caveats, but then again, no system is perfect.
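
    On the ext4-on-LVM2 question specifically, a sketch of what newer LVM (with the md-backed raid segment types) can express: per-LV raid levels inside one volume group. Growing and restriping raid5 LVs onto new PVs has historically been the weak point, so verify lvextend/lvconvert behavior on your version first.

        pvcreate /dev/sda /dev/sdb /dev/sdc
        vgcreate vg0 /dev/sda /dev/sdb /dev/sdc

        # One VG, three different redundancy levels:
        lvcreate --type raid1 -m 1 -L 100G -n important vg0   # 2-way mirror
        lvcreate --type raid5 -i 2 -L 200G -n bulk vg0        # 2 data stripes + parity
        lvcreate -i 3 -L 100G -n scratch vg0                  # plain striping = RAID0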

  • Share 3G connection over WiFi-LAN network

    - by kush.impetus
    This is how I have established a network between my PC and my laptop at home (being a novice in networking, it took me a few days to achieve the feat), and it is working perfectly. I can easily share files between them.

    - Laptop: IP address 192.168.1.4, subnet mask 255.255.255.0, default gateway 192.168.1.2
    - Desktop: IP address 192.168.1.5, subnet mask 255.255.255.0, default gateway 192.168.1.2
    - ASUS RT-N10+ router: IP address 192.168.1.4, default gateway 192.168.1.2

    I have connected the desktop PC to the router using a LAN cable, and the laptop to the router over WiFi. Both PC and laptop are running Windows 7, are in the same HomeGroup, and have the same username/password. Also, I have connected the Ethernet cable to LAN port 1 of the router. (I can't post a graphical representation of the network here, because I don't have 10 reputation points.)

    Now, what I want is to connect to the Internet using a 3G USB modem on one device and share it over the network with the other. I tried Huawei and Micromax 3G USB modems. Both obtain a new IP address whenever I connect to the Internet (meaning they have dynamic IPs). Otherwise, both have subnet mask 255.255.255.255 and default gateway 0.0.0.0. In that case, I cannot directly share Internet from the modem. Preferred DNS is blank for now on both laptop and PC. What I am planning to do is to connect to the Internet on the laptop using the 3G modem and share the connection over the laptop's Wi-Fi (as a hotspot) using Connectify, which I have done already. That, I suppose, will broadcast a static IP to connect to. Now what I can't figure out is what changes I should make to the network settings of the router and the PC so that the PC connects to the Internet broadcast by Connectify. Is that possible in the first place? Please note that I am trying to implement the network without spending anything extra (for purchasing a USB WiFi adapter for the PC, of course, which would have made life a lot easier for me). Thanks in advance

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention of stopping using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8GB memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service; this should be enough.

    Requirements:

    - Each server runs Linux CentOS 6.3
    - Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ)
    - Some of the servers are capable of serving static files and Python-driven REST APIs
    - Some of the servers host a Cassandra database cluster
    - One or more of the servers are Redis database servers
    - One or more of the servers are PostgreSQL servers

    Questions:

    - What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together?
    - Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked. I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the WiFi cards in the desktops or any other unexpected hardware limitation?
    - What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? (A kickstart sketch follows below.)
    - What other things are we missing that we should be concerned about?

    Thanks so much!
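
    On the imaging question, a sketch of the CentOS-native route: rather than raw disk images, a kickstart file captures an install declaratively, and anaconda already writes one for the current box at /root/anaconda-ks.cfg (Clonezilla covers the raw-image route). The fragment below is illustrative; the redis package assumes EPEL is configured:

        # Reuse /root/anaconda-ks.cfg, or boot the installer with (URL illustrative):
        #   linux ks=http://10.0.0.5/ks/redis-node.cfg

        # Minimal kickstart fragment:
        install
        lang en_US.UTF-8
        rootpw --iscrypted $6$...
        autopart

        %packages
        @core
        redis
        %end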
