Search Results

Search found 22829 results on 914 pages for 'nautilus script'.

  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-relative syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach, and I'd love to understand their logic before I follow or ignore their model.

  • Mapped network drive connection timeout

    - by Terix
    I have server "Alpha" and server "Beta". Server Beta has a shared folder that is mapped on server Alpha as drive X:. On server Alpha there is a .vbs script that takes some files from a local drive and copies them to the X: drive. My issue is that if no user logs on to server Alpha for a long time, the TCP connection underneath the mapped drive seems to time out, and the .vbs script fails to copy the files. As soon as I log on to server Alpha with Remote Desktop, the .vbs script copies the files successfully. I have run many tests, using log files to check what was happening, and I found no way to refresh the connection and let the .vbs script copy the files unattended. I always have to log on to server Alpha with Remote Desktop to refresh the connection before the script can copy the files without issues. What can I do to avoid logging on every time? The script runs 3 times a day, so this is very annoying. I do not have control over server Beta, so I cannot change anything there, and I am very limited in the changes I can make on server Alpha (I cannot touch the registry and that sort of thing).
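
    One common workaround is to have the script re-establish the mapping itself before copying, instead of trusting a drive letter whose session may have gone stale. Below is a rough sketch of that idea in Python; the UNC path, drive letter and copy source are placeholders, and it assumes the account running the scheduled script is allowed to map the share:

        import subprocess

        SHARE = r"\\Beta\SharedFolder"   # hypothetical UNC path of the share on Beta
        DRIVE = "X:"

        def remap_and_copy():
            # Drop the (possibly stale) mapping; ignore failure if it was not mapped.
            subprocess.call(["net", "use", DRIVE, "/delete", "/y"])
            # Re-establish the mapping so the underlying session is fresh.
            subprocess.check_call(["net", "use", DRIVE, SHARE])
            # Copy over the freshly mapped drive; robocopy codes below 8 mean success.
            rc = subprocess.call(["robocopy", r"C:\outgoing", DRIVE + "\\"])
            if rc >= 8:
                raise RuntimeError("copy failed, robocopy exit code %d" % rc)

    Copying straight to the UNC path instead of the mapped letter sidesteps the stale mapping entirely, provided the account has credentials for Beta; the same re-mapping can also be done from the existing .vbs via WScript.Network's MapNetworkDrive.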

  • Apache memory allocation error message

    - by la_f0ka
    I'm trying to set up a medium-sized Drupal 7 website on my miniserver, but I keep getting a 500 error. This is what I found in Apache's error log:

        [Wed Sep 12 15:02:04 2012] [notice] SSL FIPS mode disabled
        [Wed Sep 12 15:02:04 2012] [warn] No JkShmFile defined in httpd.conf. Using default /usr/local/apache/logs/jk-runtime-status
        [Wed Sep 12 15:02:04 2012] [notice] Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_jk/1.2.35 configured -- resuming normal operations
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] /usr/bin/php: error while loading shared libraries: libkrb5support.so.0: failed to map segment from shared object: Cannot allocate memory
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] Premature end of script headers: index.php
        [Wed Sep 12 15:02:07 2012] [error] [client 89.16.136.28] File does not exist: /home/brighton/public_html/favicon.ico

    (the "Cannot allocate memory" / "Premature end of script headers" pair repeats for every request). I contacted support and they just told me I should upgrade my package (right now I have a 512 MB account), but I'm not sure I'm buying it: even if I request a file that only contains phpinfo();, I still get the 500. Any help would be much appreciated; if you need any other information, let me know and I'll update the question. I compiled Apache with Tomcat support because I intend to use Solr; not sure if this is relevant or not.
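
    That "failed to map segment from shared object" error usually means the process hit a virtual-memory cap while mmap()ing a shared library, which on shared hosts is typically an address-space rlimit rather than physical RAM running out. A quick way to see what limits the PHP processes actually run under is to print them from a script spawned the same way; here is a minimal sketch in Python (the same numbers are visible via ulimit -v in a shell):

        import resource

        # RLIMIT_AS caps the total address space a process may map; when it is
        # low, dlopen()/mmap() of shared libraries fails with "Cannot allocate memory".
        for name in ("RLIMIT_AS", "RLIMIT_DATA", "RLIMIT_STACK"):
            soft, hard = resource.getrlimit(getattr(resource, name))
            print("%s: soft=%s hard=%s" % (name, soft, hard))

    If the soft limit on address space is close to the account's 512 MB, the "upgrade your package" answer is at least consistent with the symptom, even if raising the limit would be the cheaper fix.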

  • Hyper-V vss-writer not making current copies [migrated]

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script, the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System drives need to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" will be included in the snapshot
        # NOTE: The writer GUID is exclusive to this install/machine; it must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: (make sure drive letter X is not in use)
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. While the Hyper-V writer is active, two things happen on the guest machines: the Debian/Linux machine goes into a saved state and restores when done, all fine; the Windows guests report "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I copy over seem to be old copies: they are missing files, users and updates that are present on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download from s3 to a local directory
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the source folder of the original upload, i.e. the testup folder? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
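
    The Errno::ENOENT on /var/s3sync/testdown/weekly/.s3syncTemp suggests s3sync is trying to write its temp file into a local directory that does not exist yet; --make-dirs creates directories for keys under the prefix, but the destination tree itself may still need to exist. A defensive fix is to create the destination path before invoking s3sync. A small sketch of that idea in Python, with the paths taken from the script above:

        import os
        import subprocess

        DEST = "/var/s3sync/testdown/weekly"

        # Make sure the destination tree exists before s3sync writes .s3syncTemp into it.
        if not os.path.isdir(DEST):
            os.makedirs(DEST)

        subprocess.check_call([
            "ruby", "s3sync.rb", "-r", "-v", "--progress", "--make-dirs",
            "mybucket:backups/weekly", "/var/s3sync/testdown",
        ], cwd="/var/s3sync")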

  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this Puppet Labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target, it returns an empty data set; when run by hand on the agent, it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was Windows, and it was supposedly fixed in Puppet version 3. What am I doing wrong?

    ENC definition:

        ---
        classes:
          facttest:

    Shell script:

        #!/bin/bash
        echo "test_fact1=$(hostname)"

    Permissions:

        master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh
        agent:  -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh

    Agent message:

        Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set

    Version information:

        Puppet master: 3.5.1 (Debian), Facter master: 2.0.1
        Puppet agent:  3.6.1 (OpenSUSE), Facter agent: 2.0.1
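
    The agent message matches how Facter treats facts.d entries: a file with the execute bit set is run as a script, while a non-executable file is parsed as a static fact file (key=value text, YAML or JSON), and a bash script parsed as text yields an empty set. So the missing execute bit on the agent is the likely culprit. A quick check along these lines (a sketch, using the agent path from above) shows which way Facter will handle the file:

        import os
        import stat

        path = "/var/lib/puppet/facts.d/testfact.sh"
        mode = os.stat(path).st_mode

        if mode & stat.S_IXUSR:
            print("executable: Facter runs it as a script")
        else:
            print("not executable: Facter parses it as a static fact file")
            # Restoring the bit by hand confirms the diagnosis:
            # os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

    If that is the cause, the durable fix is to distribute the file in a way that preserves the mode, for example by managing it with a file resource that sets mode => '0755', rather than chmod-ing it after every sync.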

  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that joins together a series of applications running on separate servers; many of these applications generate files that I need access to, but these files (for various reasons) reside on their parent servers. If, from the command line, I run a bit of script such as:

        <?php var_dump(glob("\\\\machine-name\\some\\share\\*"));

    I get the full contents of that directory, proving that PHP has no problem programmatically reading the contents of a UNC share. However, if I execute the same script from the web server, I get an empty array; more specifically, if I use functions explicitly designed to "open" a directory as if it were a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct this and get my script running. The links I've checked out have not been much help, perhaps due to my lack of background where IIS is concerned, coupled with the fact that we are not actually using .NET for this. Relevant stats: Windows Server 2008 Standard SP2, IIS 7.0, PHP 5.2.9. I will be connecting to two types of servers: a few other nearly identical Server 2008 machines, and a machine running embedded XP. Links that have not been particularly helpful, though maybe I am just misreading them: http://support.microsoft.com/?id=306158 http://support.microsoft.com/kb/207671/EN-US/ http://support.microsoft.com/kb/280383/

  • upstart scripts: run a task after networking goes up

    - by The Journeyman geek
    I'm working on moving my current server setup to newer hardware, and migrating from Ubuntu Karmic Koala to Lucid Lynx. Currently I'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get IPv6 access for my systems. On the Karmic system, I used a simple init.d script to get the IPv6 client started:

        #! /bin/sh
        /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf

    I figured that, since this runs at any runlevel, it should translate to:

        respawn
        console none
        start on startup
        stop on shutdown
        script
        exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf
        emit free6_ipv6_started
        end script

    This works fine started from initctl, but it apparently fails to start when the machine boots, its status being stop/waiting. It works fine (and respawns) when started otherwise. Any ideas on where I'm going wrong, and what would be the appropriate 'start on' argument? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8', followed by the process respawning, with xxx being a PID, I suspect. I also half suspect this happens because gw6c starts before networking does, and I need eth0 up before gw6c starts.
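
    If gw6c needs the interface up before it starts, the usual fix is to key the job off the network events upstart emits rather than the bare startup event. A sketch of the relevant stanzas; the exact events available depend on the release, but on Lucid net-device-up is emitted by the network scripts as interfaces come up:

        # start only once the filesystems and a real network device are up
        start on (local-filesystems and net-device-up IFACE!=lo)
        stop on shutdown
        respawn
        exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf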

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run under a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own. The problem: browsing the website, many product descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price". The analysis: MySQL contains the description cut off at the € sign, so it's not PHP's fault. The Access databases contain the full descriptions with the € sign, so it's not the webmaster writing bad descriptions, nor eDisplay cutting them. The SQL script that runs once the site is uploaded, as stored on my local machine before upload, contains the € sign. The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À. The vsftpd log reports (obfuscated for privacy):

        Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

    which seems to be a binary transfer (and also a huge security vulnerability, because the whole database can be downloaded over unauthenticated HTTP). The eDisplay internal FTP client provides no option for ASCII/binary transfer modes. [Add] Manually uploading the SQL file via SFTP also messes up the euro sign. [Add2] Manually uploading with the Xftp client in explicit ASCII mode doesn't fix it either. It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host. The server: an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH. The question: without asking the customer to manually upload files using FileZilla, or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?
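
    One thing worth checking before blaming the transfer mode: a binary FTP transfer preserves bytes exactly, so the corruption may really be a character-set mismatch between the Windows side and the server. The euro sign has different byte values in the encodings plausibly involved, and a single CP1252 byte displayed on a UTF-8 terminal looks exactly like mojibake. A small sketch that prints the representations (standard Python codec names):

        # How the euro sign is encoded in the character sets likely involved:
        for enc in ("cp1252", "iso-8859-15", "utf-8"):
            print(enc, u"\u20ac".encode(enc))
        # cp1252      -> 0x80
        # iso-8859-15 -> 0xa4
        # utf-8       -> 0xe2 0x82 0xac

    If the dump arrives as CP1252, converting it server-side (iconv -f CP1252 -t UTF-8) before it is fed to MySQL, or declaring the right character set in the import, may be a fix that needs no cooperation from the customer.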

  • What is the --daemon option?

    - by Pascal Dimassimo
    I was installing Solr with Jetty using these instructions. Basically, those instructions have you download the Jetty startup script and copy it to /etc/init.d/jetty. But it was not working. Each time I started Jetty, I got a "FAILED" message and nothing to explain why it was happening. I decided to open up the /etc/init.d/jetty script to understand what was going on. I saw that this script was using start-stop-daemon to launch Jetty. After some debugging, I discovered that removing the --daemon option at the end of the start-stop-daemon call fixed my problem. I did a bit of research and found that this person had the same problem and resolved it the same way I did: by removing the --daemon option. What is weird is that the switch does not seem to be specific to start-stop-daemon, because it is not documented in the man page. Also, I've seen it used with other commands. So what is that --daemon option doing, and why did removing it resolve my problem? Note that I am working on Ubuntu 10.04.2 LTS.

  • is there a way to run a command before puppet implements a change?

    - by Patrick
    I want puppet to run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for: I want the command to run only if something is about to change, not on every puppet run. Here's the scenario. Say I have a bunch of web servers behind a load balancer, and I want puppet to update the web site files. To prevent problems where some files have been updated but others haven't, and the mixed versions cause errors, I want to take each server out of the load balancer pool during the change. I could write a script that tells the load balancer to remove the box from the pool; puppet could then make the change, and use postrun_command to put the box back in the pool once complete. But I need a way to run that removal script only when a change is coming. The only solution I can think of is to keep two copies of the files on the box: a staging copy which puppet updates, with a notify action that triggers the removal script and then copies from staging into the live location. But I was hoping for something more generic that would work for any change being performed (upgrading a package, restarting a service, creating a user, anything).
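
    For the file-deployment case specifically, the staging idea can be written directly in the manifest: manage the staging tree with a file resource, and chain refreshonly execs so that pool removal, live copy and pool re-add fire only when the staged content actually changed. A rough sketch, with all paths and the load-balancer scripts as placeholders:

        file { '/srv/staging':
          ensure  => directory,
          recurse => true,
          source  => 'puppet:///modules/mysite/files',
          notify  => Exec['pool-out'],
        }

        exec { 'pool-out':
          command     => '/usr/local/bin/pool-out.sh',   # hypothetical removal script
          refreshonly => true,
          notify      => Exec['deploy-live'],
        }

        exec { 'deploy-live':
          command     => '/usr/bin/rsync -a --delete /srv/staging/ /srv/www/',
          refreshonly => true,
          notify      => Exec['pool-in'],
        }

        exec { 'pool-in':
          command     => '/usr/local/bin/pool-in.sh',    # hypothetical re-add script
          refreshonly => true,
        }

    This stays scoped to the file change, though; it does not generalize to arbitrary resource types, which is exactly the limitation the question is about.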

  • Unable to create files in a directory

    - by vamsi360
    I have created a directory inside a VirtualBox VM. Using VBoxManage, from the Ubuntu host OS I am executing a script that lives inside that Ubuntu VM directory. But if the script in the VM contains commands that create a new file, they do not execute; "echo" commands before and after the touch command work fine. I even ran VBoxManage as the root user. I think the directory is not allowing the files to be created. How can I make a directory in Linux behave as 777 for all new files created in it automatically? I mean, even after I make the directory world-writable (chmod 777 dir), I am unable to execute the script from the host. It may be a simple permissions problem, but even root is unable to execute it. This is the command I am using:

        VBoxManage guestcontrol "Ubuntu_10_04" execute --image "/bin/bash" "/home/cloudlet/Desktop/temp2/three" --username root --password root --verbose --wait-exit --wait-stdout -- -l /usr

    Please help. I have been struggling with this problem for the past week.

  • How do I find and kill a php loop (process)?

    - by Hoytman
    I have a PHP script that I have been developing which calls an external API within a loop. It is being tested on a VPS running LAMP on Debian. I noticed this morning that the API was not responding to my script. When I called the provider, they told me that my server had been calling the API thousands of times per hour for the past 10 hours. I am assuming that a PHP script (which I had been working on the day before and testing on my VPS) entered an infinite loop during one of the executions and never came out (I have been testing it from the command prompt, not over the web). I have attempted to stop and start Apache, but the API support staff say that the calls are still coming in from my server's address. How can I find and stop the process? Also, is there a possibility that the Apache stop/start solved the problem, but the API is still sorting through past calls? Please forgive me for not using my local test environment correctly. Edit: I do not know the process name; I need to discover the name (or PID) based on behavior.
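
    Since the script was started from the command line, restarting Apache will not touch it; it is an ordinary process owned by whatever shell launched it. The classic route is ps aux | grep php, then kill <pid>. The same hunt can be scripted; a sketch using the third-party psutil package (assuming it is installed), which lists candidates before anything gets killed:

        import psutil

        # Show every process whose command line mentions php.
        for proc in psutil.process_iter():
            try:
                cmdline = " ".join(proc.cmdline())
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
            if "php" in cmdline:
                print(proc.pid, cmdline)
                # Once you are sure it is the runaway loop:
                # proc.terminate()

    A nohup'd or backgrounded script survives its terminal, which would explain the calls continuing for 10 hours after the session ended.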

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms, or refer me to a good guide on, how to achieve the below. I have a system running on EC2 and Amazon RDS, taking data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day: through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup: through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive, as long as it runs every day.
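
    One way to wire this up, as a sketch: a daily cron job on the existing server uses boto (the Python AWS library of that era) to file a one-time spot request whose user-data script runs the report and then powers the machine off; a one-time spot instance terminates when it shuts down, so the poweroff also ends the billing. The AMI ID, key name, bid and script path below are all placeholders:

        import boto.ec2

        # cloud-init executes this user-data shell script on first boot.
        USER_DATA = """#!/bin/bash
        /opt/reports/daily_report.sh   # hypothetical job baked into the AMI
        poweroff
        """

        conn = boto.ec2.connect_to_region("us-east-1")
        conn.request_spot_instances(
            price="0.10",                # max bid, USD per hour
            image_id="ami-12345678",     # placeholder AMI with the job installed
            count=1,
            key_name="report-key",
            instance_type="m2.xlarge",   # a high-memory class of that era
            user_data=USER_DATA,
        )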

  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with Ajax. A few days ago, without my making any changes, the script stopped working. The script works sometimes, but it is very slow; other times it doesn't load completely; yet other times it loads an empty page, or it shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log. We contacted iPage.com hosting support and sent screenshots, and all we managed to get from them was an incomplete screenshot and the message 'We tested. The page loads.' So they won't admit the fault. It looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support; we have had all kinds of problems with iPage hosting. My questions are: 1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, to get it to appear more often. 2. Is there any way to test a server's responses, to see if it drops connections?
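
    For the second question, intermittent drops become measurable if the page is requested in a loop and every attempt's outcome is recorded: status code, elapsed time, or the exact exception when the connection dies. A minimal sketch (Python 3 standard library; the URL is a placeholder):

        import time
        import urllib.request

        URL = "http://www.example.com/page.php"   # the failing page

        for i in range(200):
            start = time.time()
            try:
                with urllib.request.urlopen(URL, timeout=30) as resp:
                    body = resp.read()
                    status = resp.status
                print("%3d  HTTP %s  %5.2fs  %d bytes" % (i, status, time.time() - start, len(body)))
            except Exception as exc:   # resets, timeouts, empty replies
                print("%3d  FAILED after %5.2fs: %r" % (i, time.time() - start, exc))
            time.sleep(1)

    A run of a few hundred requests yields a failure rate you can put in front of the host's support, which is much harder to wave away than a screenshot.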

  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
          abc) export BIN=${ABC_BIN} ;;
          def) export BIN=${DEF_BIN} ;;
          *)   export BIN=${BASE_BIN} ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Right now these VARs are exported only in a subshell, but I want them to be set in my parent shell as well, so that when I am back at the prompt those vars are still set correctly. I know about . .myscript.sh, but is there a way to do it without 'sourcing'? My users often forget to 'source'. EDIT1: removed the "exit 0" part; this was just me typing without thinking first. EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) 2 apps: ABC and DEF. Each app is run in production by a separate user, usrabc or usrdef, who has set up their own $BIN, $CFG, $ORA_HOME, whatever, specific to their app. So:

        ABC's $BIN = /opt/abc/bin   # $ABC_BIN in the above script
        DEF's $BIN = /opt/def/bin   # $DEF_BIN

    Now, on the dev box, developers can develop both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file (above) so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc., which makes me do it interactively, asking for a sandbox name and so on:

        /home/justin_case/sandbox_abc_beta2
        /home/justin_case/sandbox_abc_r1
        /home/justin_case/sandbox_def_r1

    The other option I have considered is writing aliases and adding them to every user's profile:

        alias 'setup_env=. .myscript.sh'

    and running it with 'setup_env parameter1 ... parameterX'. This makes more sense to me now.
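
    A child process can never modify its parent's environment, so some form of sourcing is unavoidable; the practical trick is to hide the dot behind a function (or the alias already considered) in each user's profile, so nobody has to remember it. A sketch, assuming the script sits at a fixed path:

        # in each user's ~/.profile
        setup_env() {
            # '.' runs the script in the *current* shell,
            # so its exports survive after it returns
            . /path/to/.myscript.sh "$@"
        }

    After that, setup_env abc reads like a normal command but still mutates the caller's environment, arguments included, which the alias form handles less cleanly in ksh.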

  • Is it possible to ack nagios alerts from the terminal on a remote workstation?

    - by cat pants
    I have Nagios alerts set up to come through Jabber, with an HTTP link to acknowledge them. Is it possible to write a script I can run from a terminal on a remote workstation that takes the hostname as a parameter and acks the alert?

        ./ack hostname

    The benefit, while seemingly mundane, is threefold. First, it takes HTTP load off Nagios. Second, Nagios HTTP pages can take 10-20 seconds to load, so I want to save time there. Third, it avoids the slower route of mouse + web interface + annoyingly slow browser. Ideally, I would like a script bound to a keyboard shortcut that simply acks the most recent alert. Finally, I want to take the inputs from a joystick or some buttons and connect one to a big red button bound to the script, so I can ack the most recent Nagios alert just by hitting the button. (It would be rad too if the button had a screen on the enclosure that showed the text of the alert getting acked.) Make fun of me all you want, but this is actually something that would be useful to me. If I can save five seconds per alert, and I get 200 alerts per day I need to ack, that's saving me 15 minutes a day. And isn't the whole point of the sysadmin to automate what can be automated? Thanks!
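
    Nagios can take acknowledgements without the web UI at all, through its external command file (a named pipe, commonly /usr/local/nagios/var/rw/nagios.cmd; the path varies by install). A sketch of the ./ack idea in Python, meant to run on the Nagios host itself, for example via ssh from the workstation; the pipe path, author and comment text are assumptions to adapt:

        #!/usr/bin/env python
        import sys
        import time

        CMD_FILE = "/usr/local/nagios/var/rw/nagios.cmd"   # adjust to your install

        def ack_host(hostname):
            # ACKNOWLEDGE_HOST_PROBLEM;<host>;<sticky>;<notify>;<persistent>;<author>;<comment>
            line = "[%d] ACKNOWLEDGE_HOST_PROBLEM;%s;2;1;1;ackscript;acked from terminal\n" % (
                int(time.time()), hostname)
            with open(CMD_FILE, "w") as pipe:
                pipe.write(line)

        if __name__ == "__main__":
            ack_host(sys.argv[1])

    Wrapped as ssh nagioshost ./ack hostname, it binds to a keyboard shortcut, or to the big red button, just as well; acknowledging a service instead of a host uses ACKNOWLEDGE_SVC_PROBLEM with the service description as an extra field.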

  • schroot build environment setup how to avoid bind-mount home

    - by minghua
    Recent Linux distributions such as Fedora and Ubuntu use a chroot environment to make their builds, because a build often needs special tools installed, and installing them into a chroot avoids making any changes to the host system. To set up such a build environment, the first step is to make a chroot. I'm following the setup guide at https://wiki.debian.org/Schroot:

        [wheezy-test]
        description=Contains the SPICE program
        aliases=test
        type=directory
        directory=/srv/chroot/test
        users=jsmith
        root-groups=root
        script-config=desktop/config
        personality=linux
        preserve-environment=true

    On my setup, /home on the host is on /dev/mapper. When the schroot is entered, the same /home is bind-mounted. Is there a way to avoid this? I would prefer to use a different /home inside the chroot. When I change the type from directory to plain, the binding is not performed, but that also loses /proc, /sys, etc., which would then have to be bind-mounted manually; that does not seem like a good solution. If a simple configuration change is unavailable, any idea where the script for type=directory lives? I'll probably modify that script manually. Thanks in advance for any answers or hints!
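
    The bind mounts for a directory-type chroot are not hard-coded; they come from the fstab of the profile pointed to by script-config (here the desktop profile, so /etc/schroot/desktop/fstab on a stock install). Commenting out the /home line there should stop that one bind mount while keeping /proc and /sys. A sketch of the relevant file, assuming the default layout:

        # /etc/schroot/desktop/fstab (profile referenced by script-config=desktop/config)
        /proc   /proc   none  rw,bind  0  0
        /sys    /sys    none  rw,bind  0  0
        /dev    /dev    none  rw,bind  0  0
        # /home /home   none  rw,bind  0  0   <- commented out to keep the chroot's own /home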

  • Curl Error 52 Empty reply from server

    - by Paul Sheldrake
    Hello, I have a cron job set up on one server to run a backup script in PHP that is hosted on another server. The command I've been using is formatted like this:

        curl -sS http://www.example.com/backup.php

    Lately I've been getting this error when the cron job runs:

        curl: (52) Empty reply from server

    I have no idea what this means. If I go to the link directly in my browser, the script runs fine and I get my little backup zip file. Can anyone help? Thanks, Paul
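
    Error 52 means the server closed the connection without sending any HTTP response at all; for a long-running backup script that often points at something killing PHP before it prints headers (a timeout, a memory limit, or a proxy in between). Re-running the request verbosely with a generous time allowance usually shows where it dies; a sketch with standard curl flags:

        curl -v --max-time 3600 http://www.example.com/backup.php

    The browser working while cron fails is consistent with a timing difference: the interactive request may finish faster, or under different load, than the scheduled one.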

  • Facebook LikeBox IFrame over SSL

    - by Midday
    The iframe version of the Like Box is served over HTTP by default. The Facebook developer wiki says, under "Using the Like Box with SSL", that I should load the FacebookConnect script over HTTPS, but I don't want the FacebookConnect script, only the iframe. I found that calling https://www.facebook.com/plugins/likebox.php?#ALLMYPARAMETERS# works and doesn't break the SSL, even though this is not in their wiki. Since it is not in their wiki, will it be deprecated, or can I trust it to keep working for an extended while?

  • Attempting to update multiple partial views from a single Ajax.ActionLink

    - by mwright
    I have a partial view which contains other partial views. I am trying to update the main partial view ("MainPartialView") from an Ajax.ActionLink in a partial view contained by the main partial view ("DetailsView"). Everything appears to be called just fine, and I can step through and execute all of the code on the pages. However, after that is all done, it throws this error in a popup box in Visual Studio: htmlfile: Unknown runtime error. This error puts the break point in the MicrosoftAjax.js file, line 5, col 83,632, ch 83632. Any thoughts?

    Index page:

        <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
        <ul>
        <% foreach (DomainObject domainObject in Model) { %>
            <% Html.RenderPartial("MainPartialView", domainObject); %>
        <% } %>
        </ul>

    MainPartialView:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DomainObject>" %>
        <li>
            <div id="<%= Model.Id %>">
                <%= Ajax.ActionLink("Details", "PartialViewAction", "PartialViewController",
                        new { id = Model.Id, },
                        new AjaxOptions { UpdateTargetId = "UpdateTargetId" }) %>
                <% Html.RenderPartial("Details", Model); %>
                <div id="Details"></div>
            </div>
        </li>

    Details:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DomainObject>" %>
        <% foreach (var link in Model.Links) { %>
            <div>
                <div>
                    <%= link.Name %>
                </div>
                <div>
                    <%= Ajax.ActionLink("Submit this Action", "DoAction", "XTrademark",
                            new { id = Model.TrademarkId, id2 = actionStateLink.ActionStateLinkId },
                            new AjaxOptions { UpdateTargetId = Model.TrademarkId.ToString() }) %>
                </div>
            </div>
            <br />
        <% } %>

  • Attempting to update partial view using Ajax.ActionLink gives error in MicrosoftAjax.js

    - by mwright
    I am trying to update the partial view ("OnlyPartialView") from an Ajax.ActionLink which is in the same partial view. While executing the foreach loop, it throws this error in a popup box in Visual Studio: htmlfile: Unknown runtime error. This error puts the break point in the MicrosoftAjax.js file, line 5, col 83,632, ch 83632. The page is not updated appropriately. Any thoughts or ideas on how I could troubleshoot this? It was previously nested partial views; I've simplified it for this example, but this code produces the same error. Is there a better way to do what I am trying to do?

    Index page:

        <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
        <ul>
        <% foreach (DomainObject domainObject in Model) { %>
            <% Html.RenderPartial("OnlyPartialView", domainObject); %>
        <% } %>
        </ul>

    OnlyPartialView:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<ProjectName.Models.DomainObject>" %>
        <%@ Import Namespace="ProjectName.Models" %>
        <li>
            <div id="<%= Model.Id %>">
                //DISPLAY ATTRIBUTES
            </div>
            <div id="<%= Model.Id %>ActionStateLinks">
                <% foreach (var actionStateLink in Model.States[0].State.ActionStateLinks) { %>
                    <div id="Div1">
                        <div>
                            <%= actionStateLink.Action.Name %>
                        </div>
                        <div>
                            <%= Ajax.ActionLink("Submit this Action", "DoAction", "ViewController",
                                    new { id = Model.Id, id2 = actionStateLink.ActionStateLinkId },
                                    new AjaxOptions { UpdateTargetId = Model.Id.ToString() }) %>
                        </div>
                    </div>
                    <br />
                <% } %>
            </div>
        </li>

    Controller:

        public ActionResult DoAction(Guid id, Guid id2)
        {
            DomainObject domainObject = _repository.GetDomainObject(id);
            ActionStateLink actionStateLink = _repository.GetActionStateLink(id2);
            domainObject.States[0].StateId = actionStateLink.FollowingStateId;
            _repository.AddDomainObjectAction(domainObject, actionStateLink, DateTime.Now);
            _repository.Save();
            return PartialView("OnlyPartialView", _repository.GetDomainObject(id));
        }

  • Python: nonblocking read from stdout of threaded subprocess

    - by sberry2A
    I have a script (worker.py) that prints unbuffered output in the form...

        1
        2
        3
        ...
        n

    where n is some constant number of iterations a loop in this script will make. In another script (service_controller.py) I start a number of threads, each of which starts a subprocess using subprocess.Popen(stdout=subprocess.PIPE, ...). Now, in my main thread (service_controller.py) I want to read the output of each thread's worker.py subprocess and use it to calculate an estimate of the time remaining till completion. I have all of the logic working that reads the stdout from worker.py and determines the last printed number. The problem is that I can not figure out how to do this in a non-blocking way. If I read a constant bufsize, then each read will end up waiting for the same data from each of the workers. I have tried numerous ways, including using fcntl, select + os.read, etc. What is my best option here? I can post my source if needed, but I figured the explanation describes the problem well enough. Thanks for any help here. EDIT: adding sample code. I have a worker that starts a subprocess:

        class WorkerThread(threading.Thread):
            def __init__(self):
                self.completed = 0
                self.process = None
                self.lock = threading.RLock()
                threading.Thread.__init__(self)

            def run(self):
                cmd = ["/path/to/script", "arg1", "arg2"]
                self.process = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=1, shell=False)
                #flags = fcntl.fcntl(self.process.stdout, fcntl.F_GETFL)
                #fcntl.fcntl(self.process.stdout.fileno(), fcntl.F_SETFL, flags | os.O_NONBLOCK)

            def get_completed(self):
                self.lock.acquire()
                fd = select.select([self.process.stdout.fileno()], [], [], 5)[0]
                if fd:
                    self.data += os.read(fd, 1)
                    try:
                        self.completed = int(self.data.split("\n")[-2])
                    except IndexError:
                        pass
                self.lock.release()
                return self.completed

    I then have a ThreadManager:

        class ThreadManager():
            def __init__(self):
                self.pool = []
                self.running = []
                self.lock = threading.Lock()

            def clean_pool(self, pool):
                for worker in [x for x in pool if not x.isAlive()]:
                    worker.join()
                    pool.remove(worker)
                    del worker
                return pool

            def run(self, concurrent=5):
                while len(self.running) + len(self.pool) > 0:
                    self.clean_pool(self.running)
                    n = min(max(concurrent - len(self.running), 0), len(self.pool))
                    if n > 0:
                        for worker in self.pool[0:n]:
                            worker.start()
                        self.running.extend(self.pool[0:n])
                        del self.pool[0:n]
                    time.sleep(.01)
                for worker in self.running + self.pool:
                    worker.join()

    and some code to run it:

        threadManager = ThreadManager()
        for i in xrange(0, 5):
            threadManager.pool.append(WorkerThread())
        threadManager.run()

    I have stripped out a lot of the other code in hopes of pinpointing the issue.
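
    One way out of the non-blocking knot is not to poll at all: give each worker thread a plain blocking readline loop on its own pipe and let it update a shared counter, which the main thread samples whenever it wants. A minimal sketch of that pattern (written for Python 3, where the pipe yields bytes; on the Python 2 of the original, the sentinel would be "" instead of b""):

        import subprocess
        import threading

        class Worker(threading.Thread):
            def __init__(self, cmd):
                threading.Thread.__init__(self)
                self.cmd = cmd
                self.completed = 0    # last progress number seen

            def run(self):
                proc = subprocess.Popen(self.cmd, stdout=subprocess.PIPE)
                # Blocking readline is fine here: only this thread waits on this pipe.
                for line in iter(proc.stdout.readline, b""):
                    try:
                        self.completed = int(line)
                    except ValueError:
                        pass
                proc.wait()

        workers = [Worker(["/path/to/script", "arg1", "arg2"]) for _ in range(5)]
        for w in workers:
            w.start()
        # The main thread can now read w.completed at leisure for its time estimate.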

  • how to set style to grid in extjs

    - by vaishali
    How do I set a style on a grid so that it displays a given font family? I am trying it like this:

        style: {'font-family': 'Brush Script MT', 'font-weight': 'bold' }

    but the result does not reflect it. I am also trying:

        style: 'font-family:Brush Script MT; font-size:300px',

    but that does not show the result either. Can you please tell me why?
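
    One thing worth ruling out in the string form: a font family whose name contains spaces must be quoted inside the CSS value, otherwise the declaration is invalid and the browser drops it. A sketch of the quoted variant (whether the grid's style config reaches the element you expect still depends on the Ext version and component):

        style: "font-family: 'Brush Script MT', cursive; font-weight: bold;"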

  • Storing Data from both POST variables and GET parameters

    - by Ali
    I want my Python script to accept both POST variables and query-string parameters from the web address at the same time. The script has this code:

        form = cgi.FieldStorage()
        print form

    However, this only captures the POST variables and none of the query variables from the web address. Is there a way to do this? Thanks, Ali
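
    When the request method is POST, cgi.FieldStorage parses the request body, but the query string is still available separately in the QUERY_STRING environment variable, so the two sets can be merged by hand. A sketch in the Python 2 style of the original:

        import cgi
        import os
        import urlparse

        form = cgi.FieldStorage()                                       # POST body fields
        query = urlparse.parse_qs(os.environ.get("QUERY_STRING", ""))   # URL parameters

        params = {}
        for key in form.keys():
            params[key] = form.getvalue(key)
        for key, values in query.items():
            params.setdefault(key, values[0])   # POST wins on name clashes

        print params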
