Search Results

Search found 1351 results on 55 pages for 'aws s3'.

Page 33/55 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / wordpress and amazon s3. Now I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this: /* If the request is for pictures, javascript, css, etc */ if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") { /* Remove the cookie and make the request static */ unset req.http.cookie; return (lookup); } I do not understand why this is done. Most of the examples also run NginX as a webserver. Now the question is, why would you use the varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files so that php-fpm / mysql don't get hit that much. Am I correct or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website, where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks on different cache apps and webserver apps. http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ NginX is actually faster in getting your static content, so it makes more sense to just let it pass. NginX works great with static files. -- Apart from that, most of the time static content is not even in the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the varnish cache is the last place where you want to have your static content stored.
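
    A minimal sketch of the alternative described above, assuming Varnish 2.x vcl_recv syntax: instead of stripping cookies and doing a cache lookup for static assets, hand them straight through to the nginx backend, so only dynamic pages occupy the Varnish cache.

        sub vcl_recv {
            # Let nginx serve static assets directly; cache only dynamic pages
            if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") {
                return (pass);
            }
        }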

    Read the article

  • Run a rails server on Amazon EC2 [on hold]

    - by Jashwant
    Context: I've tried the rubber gem, but it does not fulfill my requirements (I needed to deploy on an existing instance, so don't recommend rubber). So, I followed this excellent tutorial http://stackoverflow.com/questions/15535140/installing-ruby-2-0-and-rails-4-0-0beta-on-aws-ec2 Now, I have ruby 2.0 and rails 4.0.0 running on AWS EC2. I successfully ran the server with RDS (mysql) as the db and the default webrick as the server (using the command rails server). But I've read that webrick is a development server and shouldn't be used in production. What I tried: I googled and came up with some alternatives. Capistrano Nginx / apache with passenger Passenger with Capistrano Unicorn Puma My Question: What exactly are capistrano / passenger? Are they middleware to ease my deployment process? I don't see any difficulty in running the rails server command. If they are just middleware, nginx with passenger and capistrano doesn't make any sense? Why would I add a learning curve (learning nginx, passenger and capistrano configs) just to run my server? I can just use nginx to deploy my app. Can't I? What combination should I use on Amazon EC2 (or maybe on some other production server)?
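
    To make the distinction concrete: Capistrano is a deployment tool (it copies your code to the server and restarts services), while Passenger is an application server that lets nginx actually run Ruby code, which plain nginx cannot do on its own. A hedged sketch of the nginx side with Passenger, where every path and name is a placeholder:

        # nginx.conf fragment, assuming nginx was built with the Passenger module
        http {
            passenger_root /usr/local/lib/ruby/gems/2.0.0/gems/passenger-4.0.5;  # placeholder path
            passenger_ruby /usr/local/bin/ruby;

            server {
                listen 80;
                server_name example.com;
                root /var/www/myapp/public;   # the Rails app's public/ directory
                passenger_enabled on;         # Passenger runs the app; nginx serves HTTP
            }
        }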

    Read the article

  • Issues Deploying Functional WAR to Elastic Beanstalk with Tomcat7

    - by BFar
    I am currently deploying OpenTripPlanner (http://github.com/OpenPlans/OpenTripPlanner.git) to Elastic Beanstalk. I'm able to successfully build and deploy opentripplanner with my own customized settings on an ec2. I have set it up so that the appropriate WAR file can be placed in the Tomcat/Webapps folder, and when Tomcat is started up, it will auto-deploy, and even download open trip planner's graph.obj from an S3. All of that works just fine, except when I try to deploy to Elastic Beanstalk. When I upload to Elastic Beanstalk, the log shows that my WAR file is successfully unpacked & successfully downloads the graph.obj from my S3. The only difference is that then nothing happens and I can't load the site in my browser. The health is RED, and I can't figure out what is going on. I've tried looking into ports and dns issues, but I can't determine what's wrong. Anyone have any ideas? Why would a WAR that works on tomcat7 outside of Beanstalk fail to be accessible?

    Read the article

  • How can I stop Excel from eating my delicious CSV files and excreting useless data?

    - by atroon
    I have a database which tracks sales of widgets by serial number. Users enter purchaser data and quantity, and scan each widget into a custom client program. They then finalize the order. This all works flawlessly. Some customers want an Excel-compatible spreadsheet of the widgets they have purchased. We generate this with a PHP script which queries the database and outputs the result as a CSV with the store name and associated data. This works perfectly well too. When opened in a text editor such as Notepad or vi, the file looks like this: "Account Number","Store Name","S1","S2","S3","Widget Type","Date" "4173","SpeedyCorp","268435459705526269","","268435459705526269","848 Model Widget","2011-01-17" As you can see, the serial numbers are present (in this case twice, not all secondary serials are the same) and are long strings of numbers. When this file is opened in Excel, the result becomes: Account Number Store Name S1 S2 S3 Widget Type Date 4173 SpeedyCorp 2.68435E+17 2.68435E+17 848 Model Widget 2011-01-17 As you may have observed, the serial numbers are enclosed by double quotes. Excel does not seem to respect text qualifiers in .csv files. When importing these files into Access, we have zero difficulty. When opening them as text, no trouble at all. But Excel, without fail, converts these files into useless garbage. Trying to instruct end users in the art of opening a CSV file with a non-default application is becoming, shall we say, tiresome. Is there hope? Is there a setting I've been unable to find? This seems to be the case with Excel 2003, 2007, and 2010.
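
    One workaround on the export side, sketched against the sample row rather than the real script: emit the long serial numbers as an Excel text formula, ="...", so Excel keeps them as text instead of collapsing them into scientific notation. Other CSV consumers may dislike the wrapper, so you might only apply it for downloads destined for Excel.

        <?php
        // Hypothetical helper, not the actual export script. fputcsv doubles the
        // inner quotes, so the raw field becomes "=""268435459705526269""".
        function excel_safe($value) {
            return '="' . $value . '"';
        }

        $out = fopen('php://output', 'w');
        $row = array('4173', 'SpeedyCorp', excel_safe('268435459705526269'), '',
                     excel_safe('268435459705526269'), '848 Model Widget', '2011-01-17');
        fputcsv($out, $row);
        ?>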

    Read the article

  • EC2 Auto-Scaling with Spot and On-Demand Instances?

    - by platforms
    I'm looking to optimize the cost of our auto-scaling EC2 groups by having them launch spot instances instead of on-demand instances. What I really want is to be able to keep some servers in the group as on-demand instances, regardless of what happens to the spot instance pricing market. Then I want any additional servers in the group, above my configured minimum, to be spot instances. I'm generally OK with the delay in adding servers via spot requests. I can't seem to find any way to do this and I've tried to scour the AWS documentation. It appears that an ASG can either be on-demand or spot, but not a hybrid. I could possibly manually add an on-demand instance to the Elastic Load Balancer assigned to the auto-scaling group, but then the load of that server would not be factored into the auto-scaling measurements and triggers. I suppose I could enter a ridiculously high bid price in order to ensure that I always get the servers I need, but then I look at the pricing history and see occasional large spikes. The AWS documentation is at odds with itself, since in one place it says that if you enter a server minimum, that number is "ensured" to be there. But then when you read about spot instances, there are no assurances. The price differential for spot is compelling, so I'd like to leverage that as much as I can while still maintaining an always-on baseline. Is this possible?
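
    One workaround along these lines, sketched with boto and entirely placeholder names: run two Auto Scaling groups behind the same ELB, a small on-demand group as the always-on baseline and a spot-backed group for everything above it; scaling policies would then be attached to the burst group. This assumes a boto release whose LaunchConfiguration accepts a spot_price argument.

        import boto.ec2.autoscale
        from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup

        conn = boto.ec2.autoscale.connect_to_region('us-east-1')

        # Baseline servers: a normal on-demand launch configuration
        ondemand = LaunchConfiguration(name='web-ondemand', image_id='ami-12345678',
                                       instance_type='m1.small', key_name='my-key')
        # Burst servers: same AMI, but launched as spot requests with a bid price
        spot = LaunchConfiguration(name='web-spot', image_id='ami-12345678',
                                   instance_type='m1.small', key_name='my-key',
                                   spot_price='0.10')
        conn.create_launch_configuration(ondemand)
        conn.create_launch_configuration(spot)

        # Both groups register their instances with the same load balancer
        baseline = AutoScalingGroup(group_name='web-baseline', load_balancers=['my-elb'],
                                    availability_zones=['us-east-1a'],
                                    launch_config=ondemand, min_size=2, max_size=2)
        burst = AutoScalingGroup(group_name='web-burst', load_balancers=['my-elb'],
                                 availability_zones=['us-east-1a'],
                                 launch_config=spot, min_size=0, max_size=10)
        conn.create_auto_scaling_group(baseline)
        conn.create_auto_scaling_group(burst)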

    Read the article

  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but am yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances though, it seems as though each instance should be completely decoupled from all other instances. i.e. If content is uploaded to the web server's local filesystem from a custom CMS backend then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me...should I be skeptical about this? I'm a little worried because my (non-technical) boss who ultimately makes the decisions is all for signing up to this cloud solution because it won't require any extra work. I'm sure that they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up to me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes, that's fine. What happens if the main DNX server fails though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server sits, and essentially have a second server pick up the checking load (active/passive or active/active, so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping/timeout as a critical issue, even though the machine in question is working fine. Would it be possible to have a setup where a worker complains that a service check is critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I want to run an instance built from this AMI, where does the instance run? In Amazon's cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a new region (locations as defined in ec2-describe-regions); how would I do that? It seems I have a poor understanding of the relation between Amazon volumes and instances; please explain it. I am only allowed to use the CLI tools to do all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps, http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error: Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64) My local computer is 32-bit, but I do not want to run the instance on my local computer; I want it to run on Amazon's servers.
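
    The error itself is about the chosen instance type, not the local machine: the AMI was registered against a 64-bit kernel image (aki-fc37bacc), so the launch has to use a 64-bit instance type. A hedged example with the old EC2 API tools, where the AMI ID, keypair and region are placeholders:

        # Launch the registered AMI on a 64-bit instance type in a chosen region
        ec2-run-instances ami-xxxxxxxx -t m1.large -k my-keypair --region us-east-1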

    Read the article

  • Passive mode FTP file download hangs from specific machine

    - by chiptuned
    I have a server which is an AWS instance that just cannot download files from a specific FTP server. I can connect to the FTP server fine and run some commands, but when I request a file it just hangs. Here is the debug output of the base linux ftp client after login: ---> SYST 215 UNIX Type: Apache FtpServer Remote system type is UNIX. ftp> get outgoing/catalog.gz catalog.gz local: catalog.gz remote: outgoing/catalog.gz ---> PASV 227 Entering Passive Mode (64,156,167,125,135,191) ---> RETR outgoing/catalog.gz 150 File status okay; about to open data connection. Thats it. Then it just sits there and nothing transfers. I have verified that a data connection is made but the client gets no data. ? ss -nt dst 64.156.167.125 State Recv-Q Send-Q Local Address:Port Peer Address:Port ESTAB 0 0 10.185.147.150:41190 64.156.167.125:21 ESTAB 0 0 10.185.147.150:48871 64.156.167.125:48557 The FTP server is not in my control and downloads from other FTP servers in passive mode have worked. Active mode does not work as the system is behind a firewall. Every FTP client I've tried has the same problem. The download works from other systems, even from other AWS instances I have with the same Security Group. Not necessarily the same distro or config though. I understand it may be some issue on the server side, but I want to know what it is about my particular machine where the transfer hangs and where on every other machine I can get my hands on, it works. Please let me know what the culprit on the client side could be or ideas on what else to look at.

    Read the article

  • How to fix - Parse error: syntax error, unexpected T_CLASS

    - by user557015
    Hi, I'm new to PHP, and i'm making a forum. all the files work except one file, add_topic.php. It gives me an error saying: Parse error: syntax error, unexpected T_CLASS in /home/a3885465/public_html/add_topic.php on line 25 I know it is probably on the lines: </span>}<span class="s4"><br> </span>else{<span class="s4"><br> </span>echo "ERROR";<span class="s4"><br> </span>}<span class="s4"><br> </span>mysql_close();<span class="s4"><br> but the whole code is below just in case. If you have any idea, it would be really appreciated, thanks! The Code for add_topic.php <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <meta http-equiv="Content-Style-Type" content="text/css"> <title></title> <meta name="Generator" content="Cocoa HTML Writer"> <meta name="CocoaVersion" content="1038.32"> <style type="text/css"> p.p1 {margin: 0.0px 0.0px 12.0px 0.0px; font: 12.0px Verdana; color: #009901} p.p2 {margin: 0.0px 0.0px 12.0px 0.0px; font: 12.0px Verdana; color: #3a00ff} span.s1 {color: #3a00ff} span.s2 {font: 12.0px 'Lucida Grande'; color: #3a00ff} span.s3 {font: 13.0px Courier; color: #404040} span.s4 {font: 12.0px 'Lucida Grande'} span.s5 {color: #009901} td.td1 {width: 566.0px; margin: 0.5px 0.5px 0.5px 0.5px} </style> </head> <body> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <td valign="middle" class="td1"> <p class="p1"><span class="s1"></span><?php<span class="s2"><br> </span><span class="s1">$host="</span><span class="s3">"host"</span><span class="s1">";</span> // Host name <span class="s4"><br> </span><span class="s1">$username="</span><span class="s3">username</span><span class="s1">";</span> // Mysql username <span class="s4"><br> </span><span class="s1">$password="password";</span> // Mysql password <span class="s4"><br> </span><span class="s1">$db_name="</span><span class="s3">database_name</span><span class="s1">";</span> // Database name <span class="s4"><br> </span><span class="s1">$tbl_name="forum_question";</span> // Table name</p> <p class="p2"><span class="s5">// Connect to server and select database.</span><span class="s4"><br> </span>mysql_connect("$host", "$username", "$password")or die("cannot connect"); <span class="s4"><br> </span>mysql_select_db("$db_name")or die("cannot select DB");</p> <p class="p2"><span class="s5">// get data that sent from form </span><span class="s4"><br> </span>$topic=$_POST['topic'];<span class="s4"><br> </span>$detail=$_POST['detail'];<span class="s4"><br> </span>$name=$_POST['name'];<span class="s4"><br> </span>$email=$_POST['email'];<span class="s4"><br> <br> </span>$datetime=date("d/m/y h:i:s"); <span class="s5">//create date time</span><span class="s4"><br> <br> </span>$sql="INSERT INTO $tbl_name(topic, detail, name, email, datetime)VALUES('$topic', '$detail', '$name', '$email', '$datetime')";<span class="s4"><br> </span>$result=mysql_query($sql);<span class="s4"><br> <br> </span>if($result){<span class="s4"><br> </span>echo "Successful<BR>";<span class="s4"><br> </span>echo "<a href=main_forum.php>View your topic</a>";<span class="s4"><br> </span>}<span class="s4"><br> </span>else{<span class="s4"><br> </span>echo "ERROR";<span class="s4"><br> </span>}<span class="s4"><br> </span>mysql_close();<span class="s4"><br> </span>?></p> </td> </tr> </tbody> </table> </body> </html> Thanks again!
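
    The parse error is not really about the if/else block: the file is a rich-text HTML export (note the Cocoa HTML Writer meta tag), so PHP is choking on the <span class="..."> markup interleaved with the code, and that class attribute is what produces T_CLASS. Stripped of that markup, the PHP the poster appears to be aiming for is roughly the sketch below; it deliberately keeps the original's placeholders and its flaws, such as unescaped SQL input.

        <?php
        // Reconstruction of the PHP buried inside the HTML markup above.
        $host = "host";               // Host name
        $username = "username";       // Mysql username
        $password = "password";       // Mysql password
        $db_name = "database_name";   // Database name
        $tbl_name = "forum_question"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        // Get data sent from the form.
        $topic  = $_POST['topic'];
        $detail = $_POST['detail'];
        $name   = $_POST['name'];
        $email  = $_POST['email'];

        $datetime = date("d/m/y h:i:s"); // create date time

        $sql = "INSERT INTO $tbl_name(topic, detail, name, email, datetime)
                VALUES('$topic', '$detail', '$name', '$email', '$datetime')";
        $result = mysql_query($sql);

        if ($result) {
            echo "Successful<BR>";
            echo "<a href=main_forum.php>View your topic</a>";
        } else {
            echo "ERROR";
        }
        mysql_close();
        ?>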

    Read the article

  • Setting up a VPN connection to Amazon VPC - routing

    - by Keeno
    I am having some real issues setting up a VPN between out office and AWS VPC. The "tunnels" appear to be up, however I don't know if they are configured correctly. The device I am using is a Netgear VPN Firewall - FVS336GV2 If you see in the attached config downloaded from VPC (#3 Tunnel Interface Configuration), it gives me some "inside" addresses for the tunnel. When setting up the IPsec tunnels do I use the inside tunnel IP's (e.g. 169.254.254.2/30) or do I use my internal network subnet (10.1.1.0/24) I have tried both, when I tried the local network (10.1.1.x) the tracert stops at the router. When I tried with the "inside" ips, the tracert to the amazon VPC (10.0.0.x) goes out over the internet. this all leads me to the next question, for this router, how do I set up stage #4, the static next hop? What are these seemingly random "inside" addresses and where did amazon generate them from? 169.254.254.x seems odd? With a device like this, is the VPN behind the firewall? I have tweaked any IP addresses below so that they are not "real". I am fully aware, this is probably badly worded. Please if there is any further info/screenshots that will help, let me know. Amazon Web Services Virtual Private Cloud IPSec Tunnel #1 ================================================================================ #1: Internet Key Exchange Configuration Configure the IKE SA as follows - Authentication Method : Pre-Shared Key - Pre-Shared Key : --- - Authentication Algorithm : sha1 - Encryption Algorithm : aes-128-cbc - Lifetime : 28800 seconds - Phase 1 Negotiation Mode : main - Perfect Forward Secrecy : Diffie-Hellman Group 2 #2: IPSec Configuration Configure the IPSec SA as follows: - Protocol : esp - Authentication Algorithm : hmac-sha1-96 - Encryption Algorithm : aes-128-cbc - Lifetime : 3600 seconds - Mode : tunnel - Perfect Forward Secrecy : Diffie-Hellman Group 2 IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows: - DPD Interval : 10 - DPD Retries : 3 IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway: - TCP MSS Adjustment : 1387 bytes - Clear Don't Fragment Bit : enabled - Fragmentation : Before encryption #3: Tunnel Interface Configuration Your Customer Gateway must be configured with a tunnel interface that is associated with the IPSec tunnel. All traffic transmitted to the tunnel interface is encrypted and transmitted to the Virtual Private Gateway. The Customer Gateway and Virtual Private Gateway each have two addresses that relate to this IPSec tunnel. Each contains an outside address, upon which encrypted traffic is exchanged. Each also contain an inside address associated with the tunnel interface. The Customer Gateway outside IP address was provided when the Customer Gateway was created. Changing the IP address requires the creation of a new Customer Gateway. The Customer Gateway inside IP address should be configured on your tunnel interface. 
Outside IP Addresses: - Customer Gateway : 217.33.22.33 - Virtual Private Gateway : 87.222.33.42 Inside IP Addresses - Customer Gateway : 169.254.254.2/30 - Virtual Private Gateway : 169.254.254.1/30 Configure your tunnel to fragment at the optimal size: - Tunnel interface MTU : 1436 bytes #4: Static Routing Configuration: To route traffic between your internal network and your VPC, you will need a static route added to your router. Static Route Configuration Options: - Next hop : 169.254.254.1 You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels. IPSec Tunnel #2 ================================================================================ #1: Internet Key Exchange Configuration Configure the IKE SA as follows - Authentication Method : Pre-Shared Key - Pre-Shared Key : --- - Authentication Algorithm : sha1 - Encryption Algorithm : aes-128-cbc - Lifetime : 28800 seconds - Phase 1 Negotiation Mode : main - Perfect Forward Secrecy : Diffie-Hellman Group 2 #2: IPSec Configuration Configure the IPSec SA as follows: - Protocol : esp - Authentication Algorithm : hmac-sha1-96 - Encryption Algorithm : aes-128-cbc - Lifetime : 3600 seconds - Mode : tunnel - Perfect Forward Secrecy : Diffie-Hellman Group 2 IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows: - DPD Interval : 10 - DPD Retries : 3 IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway: - TCP MSS Adjustment : 1387 bytes - Clear Don't Fragment Bit : enabled - Fragmentation : Before encryption #3: Tunnel Interface Configuration Outside IP Addresses: - Customer Gateway : 217.33.22.33 - Virtual Private Gateway : 87.222.33.46 Inside IP Addresses - Customer Gateway : 169.254.254.6/30 - Virtual Private Gateway : 169.254.254.5/30 Configure your tunnel to fragment at the optimal size: - Tunnel interface MTU : 1436 bytes #4: Static Routing Configuration: Static Route Configuration Options: - Next hop : 169.254.254.5 You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels. EDIT #1 After writing this post, I continued to fiddle and something started to work, just not very reliably. The local IPs to use when setting up the tunnels where indeed my network subnets. Which further confuses me over what these "inside" IP addresses are for. The problem is, results are not consistent what so ever. I can "sometimes" ping, I can "sometimes" RDP using the VPN. Sometimes, Tunnel 1 or Tunnel 2 can be up or down. When I came back into work today, Tunnel 1 was down, so I deleted it and re-created it from scratch. Now I cant ping anything, but Amazon AND the router are telling me tunnel 1/2 are fine. I guess the router/vpn hardware I have just isnt up to the job..... EDIT #2 Now Tunnel 1 is up, Tunnel 2 is down (I didn't change any settings) and I can ping/rdp again. EDIT #3 Screenshot of route table that the router has built up. Current state (tunnel 1 still up and going string, 2 is still down and wont re-connect)

    Read the article

  • Using VBA / Macro to highlight changes in excel

    - by Zaj
    I have a spread sheet that I send out to various locations to have information on it updated and then sent back to me. However, I had to put validation and lock the cells to force users to input accurate information. Then I can to use VBA to disable the work around of cut copy and paste functions. And additionally I inserted a VBA function to force users to open the excel file in Macros. Now I'm trying to track the changes so that I know what was updated when I recieve the sheet back. However everytime i do this I get an error when someone savesthe document and randomly it will lock me out of the document completely. I have my code pasted below, can some one help me create code in the VBA forum to highlight changes instead of through excel's share/track changes option? ThisWorkbook (Code): Option Explicit Const WelcomePage = "Macros" Private Sub Workbook_BeforeClose(Cancel As Boolean) Call ToggleCutCopyAndPaste(True) 'Turn off events to prevent unwanted loops Application.EnableEvents = False 'Evaluate if workbook is saved and emulate default propmts With ThisWorkbook If Not .Saved Then Select Case MsgBox("Do you want to save the changes you made to '" & .Name & "'?", _ vbYesNoCancel + vbExclamation) Case Is = vbYes 'Call customized save routine Call CustomSave Case Is = vbNo 'Do not save Case Is = vbCancel 'Set up procedure to cancel close Cancel = True End Select End If 'If Cancel was clicked, turn events back on and cancel close, 'otherwise close the workbook without saving further changes If Not Cancel = True Then .Saved = True Application.EnableEvents = True .Close savechanges:=False Else Application.EnableEvents = True End If End With End Sub Private Sub Workbook_BeforeSave(ByVal SaveAsUI As Boolean, Cancel As Boolean) 'Turn off events to prevent unwanted loops Application.EnableEvents = False 'Call customized save routine and set workbook's saved property to true '(To cancel regular saving) Call CustomSave(SaveAsUI) Cancel = True 'Turn events back on an set saved property to true Application.EnableEvents = True ThisWorkbook.Saved = True End Sub Private Sub Workbook_Open() Call ToggleCutCopyAndPaste(False) 'Unhide all worksheets Application.ScreenUpdating = False Call ShowAllSheets Application.ScreenUpdating = True End Sub Private Sub CustomSave(Optional SaveAs As Boolean) Dim ws As Worksheet, aWs As Worksheet, newFname As String 'Turn off screen flashing Application.ScreenUpdating = False 'Record active worksheet Set aWs = ActiveSheet 'Hide all sheets Call HideAllSheets 'Save workbook directly or prompt for saveas filename If SaveAs = True Then newFname = Application.GetSaveAsFilename( _ fileFilter:="Excel Files (*.xls), *.xls") If Not newFname = "False" Then ThisWorkbook.SaveAs newFname Else ThisWorkbook.Save End If 'Restore file to where user was Call ShowAllSheets aWs.Activate 'Restore screen updates Application.ScreenUpdating = True End Sub Private Sub HideAllSheets() 'Hide all worksheets except the macro welcome page Dim ws As Worksheet Worksheets(WelcomePage).Visible = xlSheetVisible For Each ws In ThisWorkbook.Worksheets If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVeryHidden Next ws Worksheets(WelcomePage).Activate End Sub Private Sub ShowAllSheets() 'Show all worksheets except the macro welcome page Dim ws As Worksheet For Each ws In ThisWorkbook.Worksheets If Not ws.Name = WelcomePage Then ws.Visible = xlSheetVisible Next ws Worksheets(WelcomePage).Visible = xlSheetVeryHidden End Sub Private Sub Workbook_Activate() Call ToggleCutCopyAndPaste(False) End Sub 
Private Sub Workbook_Deactivate() Call ToggleCutCopyAndPaste(True) End Sub This is in my ModuleCode: Option Explicit Sub ToggleCutCopyAndPaste(Allow As Boolean) 'Activate/deactivate cut, copy, paste and pastespecial menu items Call EnableMenuItem(21, Allow) ' cut Call EnableMenuItem(19, Allow) ' copy Call EnableMenuItem(22, Allow) ' paste Call EnableMenuItem(755, Allow) ' pastespecial 'Activate/deactivate drag and drop ability Application.CellDragAndDrop = Allow 'Activate/deactivate cut, copy, paste and pastespecial shortcut keys With Application Select Case Allow Case Is = False .OnKey "^c", "CutCopyPasteDisabled" .OnKey "^v", "CutCopyPasteDisabled" .OnKey "^x", "CutCopyPasteDisabled" .OnKey "+{DEL}", "CutCopyPasteDisabled" .OnKey "^{INSERT}", "CutCopyPasteDisabled" Case Is = True .OnKey "^c" .OnKey "^v" .OnKey "^x" .OnKey "+{DEL}" .OnKey "^{INSERT}" End Select End With End Sub Sub EnableMenuItem(ctlId As Integer, Enabled As Boolean) 'Activate/Deactivate specific menu item Dim cBar As CommandBar Dim cBarCtrl As CommandBarControl For Each cBar In Application.CommandBars If cBar.Name <> "Clipboard" Then Set cBarCtrl = cBar.FindControl(ID:=ctlId, recursive:=True) If Not cBarCtrl Is Nothing Then cBarCtrl.Enabled = Enabled End If Next End Sub Sub CutCopyPasteDisabled() 'Inform user that the functions have been disabled MsgBox " Cutting, copying and pasting have been disabled in this workbook. Please hard key in data. " End Sub
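
    A minimal, hypothetical starting point for the highlighting part of the question, kept separate from the cut/copy/paste code above: a Worksheet_Change handler that colours any cell a user edits, so a returned sheet shows what was touched without relying on Excel's shared-workbook track changes. It goes in each worksheet's code module, and it only records that a cell changed, not its previous value, so treat it as a sketch rather than a full audit trail.

        Private Sub Worksheet_Change(ByVal Target As Range)
            Dim c As Range
            Application.EnableEvents = False
            For Each c In Target.Cells
                c.Interior.Color = vbYellow   ' mark every edited cell
            Next c
            Application.EnableEvents = True
        End Sub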

    Read the article

  • Today's Links (6/29/2011)

    - by Bob Rhubart
    Event-Driven SOA: Events meet Services | Guido Schmutz Oracle ACE Director Guido Schmutz shows you how to achieve extreme loose coupling within a Service-Oriented Architecture by using event-driven interactions. Misconceptions About Software Architecture | Sanjeev Kumar A concise, to-the-point, and informative article by Sanjeev Kumar. Good Leaders Acknowledge What Can't Be Done - Jeffrey Pfeffer - Harvard Business Review "None of us likes to admit to bad decisions," says Jeffrey Pfeffer. "But imagine how much harder that is for someone who has been chosen to lead a large organization precisely because he or she is thought to have the power to see the future more clearly and chart a wise course." Suboptimal Thinking within Enterprise Architecture | James McGovern McGovern says: "We need to remember that enterprises live and thrive beyond just the current person at the helm." Boundaryless Information Flow | Richard Veryard "If all the boundaries are removed or porous, then the (extended) enterprise or ecosystem becomes like a giant sponge, in which all information permeates the whole," Veryard says. "Some people may think that's a good idea, but it's not what I'd call loose coupling." Coming to a City Near You: Oracle Business Analytics Summits | Rob Reynolds This series of events includes a Technology and Architecture track. New Date for Implementation of Sun Hands-On Course Requirement (Oracle Certification) As announced on the Oracle Certification website, Java Architect, Java Developer, Solaris System Administrator and Solaris Security Administrator certification tracks will include a new mandatory course attendance requirement. VirtualBox 4.0.10 is now available for download | Bob Netherton Netherton shares information on the new release. Updated Technical Best Practices whitepaper | Anthony Shorten The Technical Best Practices whitepaper has been updated with the latest advice. "New advice includes new installation advice, advanced settings, new security settings and advice for both Oracle WebLogic and IBM WebSphere installations," says Shorten. Kscope 11 ADF, AIA and Business Rules | Peter Paul van de Beek Whitehorses Solution Architect Peter Paul van de Beek shares his impressions of KScope11 presentations by Markus Eisele, Sten Vesterli, and Edwin Biemond. Amazon AWS for the learning experience | Andrej Koelewijn "Using AWS changes your expectations how your internal data center should operate," says Koelewijn. BPMN is dead, long live BPEL! (SOA Partner Community Blog) Jürgen Kress shares information -- including a long list of speakers -- for the SOA & BPM Integration Days 2011 conference, October 12th & 13th 2011 in Düsseldorf. InfoQ: HTML5 and the Dawn of Rich Mobile Web Applications James Pearce introduces cross-platform web apps development using HTML5 and web frameworks, such as jQTouch, jQuery Mobile, Sencha Touch, PhoneGap, outlining what makes a good framework. InfoQ: Interview and Book Excerpt: CMMI for Development "Frameworks like TOGAF are used to define an architecture that aligns IT assets and resources to support key business needs and processes of key stakeholders," says SEI's Mike Konrad. "But the individual application systems, capabilities, services, networks, and other IT assets and infrastructure still need to be acquired, developed, or sustained." InfoQ: Architecting a Cloud-Scale Identity Fabric | Eric Olden "The most cited reason for not moving to the cloud is concern about security," says Olden. 
"In particular, managing user identity and access in the cloud is a tough problem to solve and a big security concern for organizations."

    Read the article

  • Silverlight Cream for May 13, 2010 -- #861

    - by Dave Campbell
    In this Issue: Sigurd Snørteland, Jeff Prosise, DaveDev, Joe Zhou, Chris Eargle, John Papa(-2-, -3-), and David Anson(-2-). Shoutouts: In with the links I've listed below, Sigurd Snørteland also sent a link to this app he's working on which is actually pretty cool to see: ZuneLight. The code is not yet available. He also has a no-code demo of a Silverlight Media Center Pieter Voloshyn, Luiz Thadeu, and Jhun Iti have a very nice Silverlight image editor up: Thumba From SilverlightCream.com: WP7 - Silverlight on mobile Sigurd Snørteland submitted some links for me that have been translated to English from his blog. I hope the pages come out good because he's got a lot of good stuff on there. This one has a link to a presentation he did, and 4 projects you can load up in the emulator that he's converted to the phone: weather, worldclock, coverflow, and solitaire ... pretty cool... thanks for the links Sigurd! Understanding Page Orientation in Silverlight for Windows Phone Jeff Prosise has a really nice post up on page orientation in WP7 ... what it means to your app, how to detect it, and example code for what to do then... also love a quote by Jeff: "Silverlight for Windows Phone is the hottest thing since color TV" Why you should check out Expression Blend Behaviors when using Silverlight DaveDev has a post up describing Behaviors and why we should use them, plus tons of external links to resources, blogs, videos... all good stuff... Fiddler inspector for WCF Silverlight Polling Duplex and WCF RIA Joe Zhou announces and provides a link to a new Fiddler inspector that understands the framing in Polling Duplex and also raw binary xml and binary SOAP. Windows Phone Controls v0.7 Chris Eargle reports the release of Version 0.7 of the Windows Phone Controls project on CodePlex ... this includes a Pivot Control and a Panorama Control... both very nicely done. Binding to Silverlight ComboBox and Using SelectedValue, SelectedValuePath and DisplayMemberPath John Papa responds to a user question and put up a nice post about binding to a ComboBox and then go from the selected item to some other property ... code included No More Boxes! Exploring the PathListBox (Silverlight TV #25) Silverlight TV 25 went up on Tuesday ... thought it was going to be Thursday?? anyway ... John Papa and Adam Kinney are discussing the PathListBox and looking at some cool demos thereof. Exposing SOAP, OData, and JSON Endpoints for RIA Services (Silverlight TV 26) Since today IS Thursday, we have a new Silverlight TV, number 26, and John Papa is chatting with Deepesh Mohnani of the WCF RIA Services team about exposing all sorts of endpoints... should be something in there for everybody :) Workaround for a Silverlight data binding bug affecting various scenarios - including DataGrid+ContextMenu David Anson details the rabbit-trail he and others on the team followed in response to a problem reported via Twitter where the binding on a DataGrid seemed off by a row(!) ... weird but true, validated, and SL3/4 are bug-for-bug compatible with this too! ... But David wouldn't leave us there.. he also has a workaround. 
Sharing the code for a simple Silverlight 4 REST-based cloud-oriented file management app for Azure and S3 David Anson had an opportunity to build an app he's wanted to build for a while and shares it with us: Blobstore -- a small, lightweight Silverlight 4 application that acts as a basic front-end for the Windows Azure Simple Data Storage and the Amazon Simple Storage Service (S3) -- and remember I said he shared the source :) Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Spring Security RememberMe Services with Session Cookie

    - by Jarrod
    I am using Spring Security's RememberMe Services to keep a user authenticated. I would like to find a simple way to have the RememberMe cookie set as a session cookie rather than with a fixed expiration time. For my application, the cookie should persist until the user closes the browser. Any suggestions on how to best implement this? Any concerns on this being a potential security problem? The primary reason for doing so is that with a cookie-based token, any of the servers behind our load balancer can service a protected request without relying on the user's Authentication to be stored in an HttpSession. In fact, I have explicitly told Spring Security to never create sessions using the namespace. Further, we are using Amazon's Elastic Load Balancing, and so sticky sessions are not supported. NB: Although I am aware that as of Apr. 08, Amazon now supports sticky sessions, I still do not want to use them for a handful of other reasons. Namely that the untimely demise of one server would still cause the loss of sessions for all users associated with it. http://aws.amazon.com/about-aws/whats-new/2010/04/08/support-for-session-stickiness-in-elastic-load-balancing/
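
    One way to sketch this, assuming Spring Security 3.x where AbstractRememberMeServices exposes a protected setCookie hook (the class name and its bean wiring are placeholders): subclass TokenBasedRememberMeServices and force the cookie's max age to -1, which the Servlet API treats as a session cookie. Note the signed token still carries its own expiry, so a captured cookie value remains valid until that expiry even after the browser discards it.

        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.springframework.security.web.authentication.rememberme.TokenBasedRememberMeServices;

        public class SessionCookieRememberMeServices extends TokenBasedRememberMeServices {

            @Override
            protected void setCookie(String[] tokens, int maxAge,
                                     HttpServletRequest request, HttpServletResponse response) {
                // -1 means no Expires attribute: the cookie lives only for the browser session
                super.setCookie(tokens, -1, request, response);
            }
        }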

    Read the article

  • vestal_versions : problem with column named changes

    - by arkannia
    Hi, I have been working with vestal_versions for 2 months. Everything was fine until this afternoon. I haven't done anything special (or I don't remember doing so...), but the code works fine on other computers... The problem is that I'm not able to save my model anymore; Rails gives me this error: ActiveRecord::DangerousAttributeError: changes is defined by ActiveRecord The changes field is by default an ActiveRecord method. In the console, the message is the following: ActiveRecord::DangerousAttributeError: changes is defined by ActiveRecord Here are my local gems: abstract (1.0.0) actionmailer (3.0.0.beta3) actionpack (3.0.0.beta3) activemodel (3.0.0.beta3) activerecord (3.0.0.beta3) activeresource (3.0.0.beta3) activesupport (3.0.0.beta3) arel (0.3.3) builder (2.1.2) bundler (0.9.25, 0.9.24) crack (0.1.7) erubis (2.6.5) god (0.9.0) haml (3.0.1, 2.2.23) i18n (0.3.7) mail (2.2.0) memcache-client (1.8.3) memcached (0.17.7) mime-types (1.16) polyglot (0.3.1) rack (1.1.0) rack-mount (0.6.3) rack-test (0.5.3) rails (3.0.0.beta3) railties (3.0.0.beta3) rake (0.8.7) savon (0.7.8, 0.7.6) text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.6, 0.13.4) treetop (1.4.5) tzinfo (0.3.20) And here is my Gemfile: source 'http://gemcutter.org' gem "rails", "3.0.0.beta3" gem "will_paginate", "3.0.pre" #gem 'nokogiri' #gem 'curb' #gem 'handsoap' gem 'savon' gem 'mysql' gem 'haml', '2.2.23' #gem 'haml', '3.0.1' gem 'hpricot' gem 'i18n', '> 0.3.5' gem 'i18n_routing' gem 'i18n_auto_scoping' gem 'handler301', :git => 'http://github.com/kwi/handler301.git' gem 'seo_meta_builder' gem 'vestal_versions' #gem 'paperclip', :git => 'git://github.com/thoughtbot/paperclip.git', :branch => 'rails3' ## Bundle edge rails: gem "rails", :git => "git://github.com/rails/rails.git" ## Bundle the gems you use: # gem "bj" # gem "hpricot", "0.6" # gem "sqlite3-ruby", :require => "sqlite3" # gem "aws-s3", :require => "aws/s3" ## Bundle gems used only in certain environments: # gem "rspec", :group => :test # group :test do # gem "webrat" # end If you have any suggestions to solve this issue, I'll be glad to hear them! Thanks
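
    A hedged workaround, assuming the clash comes from the changes column that older vestal_versions migrations add to the versions table (later releases of the gem renamed that column to modifications for exactly this reason): rename the column away from the reserved ActiveRecord name, and make sure the installed gem version expects the new name.

        class RenameVersionsChangesColumn < ActiveRecord::Migration
          def self.up
            rename_column :versions, :changes, :modifications
          end

          def self.down
            rename_column :versions, :modifications, :changes
          end
        end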

    Read the article

  • Lua Operator Overloading

    - by Pessimist
    I've found some places on the web saying that operators in Lua are overloadable but I can't seem to find any example. Can someone provide an example of, say, overloading the + operator to work like the .. operator works for string concatenation? EDIT 1: to Alexander Gladysh and RBerteig: If operator overloading only works when both operands are the same type and changing this behavior wouldn't be easy, then how come the following code works? (I don't mean any offense, I just started learning this language): printf = function(fmt, ...) io.write(string.format(fmt, ...)) end Set = {} Set.mt = {} -- metatable for sets function Set.new (t) local set = {} setmetatable(set, Set.mt) for _, l in ipairs(t) do set[l] = true end return set end function Set.union (a,b) -- THIS IS THE PART THAT MANAGES OPERATOR OVERLOADING WITH OPERANDS OF DIFFERENT TYPES -- if user built new set using: new_set = some_set + some_number if type(a) == "table" and type(b) == "number" then print("building set...") local mixedset = Set.new{} for k in pairs(a) do mixedset[k] = true end mixedset[b] = true return mixedset -- elseif user built new set using: new_set = some_number + some_set elseif type(b) == "table" and type(a) == "number" then print("building set...") local mixedset = Set.new{} for k in pairs(b) do mixedset[k] = true end mixedset[a] = true return mixedset end if getmetatable(a) ~= Set.mt or getmetatable(b) ~= Set.mt then error("attempt to 'add' a set with a non-set value that is also not a number", 2) end local res = Set.new{} for k in pairs(a) do res[k] = true end for k in pairs(b) do res[k] = true end return res end function Set.tostring (set) local s = "{" local sep = "" for e in pairs(set) do s = s .. sep .. e sep = ", " end return s .. "}" end function Set.print (s) print(Set.tostring(s)) end s1 = Set.new{10, 20, 30, 50} s2 = Set.new{30, 1} Set.mt.__add = Set.union -- now try to make a new set by unioning a set plus a number: s3 = s1 + 8 Set.print(s3) --> {1, 10, 20, 30, 50}
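
    For the original ask, making + behave like .. on plain strings, a very small example works because every Lua string shares a single metatable that can be extended; this is a sketch of the idea rather than something to leave in production code:

        -- Add an __add metamethod to the shared string metatable so that
        -- `+` falls back to concatenation when the operands aren't numbers.
        local mt = getmetatable("")
        mt.__add = function(a, b)
          return a .. b
        end

        print("foo" + "bar")  --> foobar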

    Read the article

  • Looping through a SimpleXML object, or turning the whole thing into an array.

    - by Coffee Cup
    I'm trying to work out how to iterate though a returned SimpleXML object. I'm using a toolkit called Tarzan AWS, which connects to Amazon Web Services (SimpleDB, S3, EC2, etc). I'm specifically using SimpleDB. I can put data into the Amazon SimpleDB service, and I can get it back. I just don't know how to handle the SimpleXML object that is returned. The Tarzan AWS documentation says this: Look at the response to navigate through the headers and body of the response. Note that this is an object, not an array, and that the body is a SimpleXML object. Here's a sample of the returned SimpleXML object: [body] = SimpleXMLElement Object ( [QueryWithAttributesResult] = SimpleXMLElement Object ( [Item] = Array ( [0] = SimpleXMLElement Object ( [Name] = message12413344443260 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = john ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is a message. ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334444 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413344443260 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.1 ) ) ) [1] = SimpleXMLElement Object ( [Name] = message12413346907303 [Attribute] = Array ( [0] = SimpleXMLElement Object ( [Name] = active [Value] = 1 ) [1] = SimpleXMLElement Object ( [Name] = user [Value] = fred ) [2] = SimpleXMLElement Object ( [Name] = message [Value] = This is another message ) [3] = SimpleXMLElement Object ( [Name] = time [Value] = 1241334690 ) [4] = SimpleXMLElement Object ( [Name] = id [Value] = 12413346907303 ) [5] = SimpleXMLElement Object ( [Name] = ip [Value] = 10.10.10.2 ) ) ) ) So what code do I need to get through each of the object items? I'd like to loop through each of them and handle it like a returned mySQL query. For example, I can query SimpleDB and then loop though the SimpleXML so I can display the results on the page. Alternatively, how do you turn the whole shebang into an array? I'm new to SimpleXML, so I apologise if my questions aren't specific enough.
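
    Given the structure shown above, and assuming $response is the object Tarzan AWS returns, a plain nested foreach over the SimpleXML body reaches each item and its attributes; casting elements to string extracts the text values. The property names here simply mirror the dump, so adjust them to the real response:

        <?php
        foreach ($response->body->QueryWithAttributesResult->Item as $item) {
            echo (string) $item->Name . "\n";
            foreach ($item->Attribute as $attribute) {
                echo '  ' . (string) $attribute->Name . ' = '
                   . (string) $attribute->Value . "\n";
            }
        }
        ?>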

    Read the article

  • Merging Passed Parameters

    - by Josh Crowder
    I have a two data arrays sent in from a form, one called transloaded and the other video which is the actual form for the model. I need to get [:video_encoded][:url] and save that to [:video][:flash_url] This is the passed arguments or transloaded, when I try and access [:transload][:results][:video_encode] I get nil. print params[:transload] { "assembly_id":"d59b4293b3d79d2ccd1948c02421c6a6", "status":"success", "uploads":{ "video":{ "name":"bbc_one.mp4", "mime":"video/mp4", "ext":"mp4", "size":601104, "meta":{ "width":720, "height":404, "video_fps":25, "video_bitrate":null, "video_format":"avc1", "video_codec":"ffh264", "audio_bitrate":"128k", "audio_codec":"faad", "duration":3.07, "device_vendor":null, "device_name":null, "device_software":null, "latitude":null, "longitude":null }, "url":"http://tmp.transloadit.com/" } }, "results":{ "video_encode":{ "name":"bbc_one.flv", "mime":"video/x-flv", "steps":["encode","export"], "ext":"flv", "size":388317, "meta":{ "width":480, "height":320, "video_fps":25, "video_bitrate":"512k", "video_format":"FLV1", "video_codec":"ffflv", "audio_bitrate":"64k", "audio_codec":"mp3", "duration":3.11, "device_vendor":null, "device_name":null, "device_software":null, "latitude":null, "longitude":null }, "url":"http://s3.transloadit.com/b7deac9c96af6c745e914e25d0350baa/7a/2b09e822265ac2328789b40dcc02ae/bbc_one.flv" }, "video_encode_iphone":{ "name":"bbc_one.qt", "mime":"video/quicktime", "steps":["encode_iphone","export"], "ext":"qt", "size":218236, "meta":{ "width":480, "height":320, "video_fps":25, "video_bitrate":null, "video_format":"avc1", "video_codec":"ffh264", "audio_bitrate":"128k", "audio_codec":"faad", "duration":3.04, "device_vendor":null, "device_name":null, "device_software":null, "latitude":null, "longitude":null }, "url":"http://s3.transloadit.com/31/58bcc80d5345e52a42c9773125e8f0/bbc_one.qt" } } } Here is what I am trying to use video_links = { :flash_url => params[:transload][:results][:video_encode][:url], :mp4_url => params[:transload][:results][:video_encode_iphone][:url] } params[:video].merge(video_links)
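
    A hedged guess, sketched rather than verified: if the service posts the payload as a single JSON string field, params[:transload] is a String rather than a nested hash, so it needs decoding before the nested lookup works. Separately, merge returns a new hash, so the result has to be captured (or merge! used) for the video params to actually change.

        # Assumes ActiveSupport's JSON decoding is available in this Rails app.
        transload = params[:transload]
        transload = ActiveSupport::JSON.decode(transload) if transload.is_a?(String)

        video_links = {
          :flash_url => transload["results"]["video_encode"]["url"],
          :mp4_url   => transload["results"]["video_encode_iphone"]["url"]
        }
        params[:video] = params[:video].merge(video_links)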

    Read the article

  • HTTPSConnection module missing in Python 2.6 on CentOS 5.2

    - by d2kagw
    Hi guys, I'm playing around with a Python application on CentOS 5.2. It uses the Boto module to communicate with Amazon Web Services, which requires communication through a HTTPS connection. When I try running my application I get an error regarding HTTPSConnection being missing: "AttributeError: 'module' object has no attribute 'HTTPSConnection'" Google doesn't really return anything relevant, I've tried most of the solutions but none of them solve the problem. Has anyone come across anything like it? Here's the traceback: Traceback (most recent call last): File "./chatter.py", line 114, in <module> sys.exit(main()) File "./chatter.py", line 92, in main chatter.status( ) File "/mnt/application/chatter/__init__.py", line 161, in status cQueue.connect() File "/mnt/application/chatter/tools.py", line 42, in connect self.connection = SQSConnection(cConfig.get("AWS", "KeyId"), cConfig.get("AWS", "AccessKey")); File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/sqs/connection.py", line 54, in __init__ self.region.endpoint, debug, https_connection_factory) File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 418, in __init__ debug, https_connection_factory) File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 189, in __init__ self.refresh_http_connection(self.server, self.is_secure) File "/usr/local/lib/python2.6/site-packages/boto-1.7a-py2.6.egg/boto/connection.py", line 247, in refresh_http_connection connection = httplib.HTTPSConnection(host) AttributeError: 'module' object has no attribute 'HTTPSConnection'
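
    A quick check that narrows this down independently of boto: on a Python that was compiled without OpenSSL support (typically because openssl-devel was not installed when 2.6 was built from source), the ssl module is missing and httplib never gains HTTPSConnection.

        import httplib

        print hasattr(httplib, 'HTTPSConnection')   # False on an SSL-less build

        try:
            import ssl
        except ImportError:
            print "Python was built without SSL support; install openssl-devel and rebuild"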

    Read the article

  • Rate limiting a ruby file stream

    - by Matthew Savage
    I am working on a project which involves uploading flash video files to an S3 bucket from a number of geographically distributed nodes. The video files are about 2-3MB each, and we are only sending one file (per node) every ten minutes; however, the bandwidth we consume needs to be rate limited to ~20k/s, as these nodes are delivering streaming media to a CDN, and due to the locations we are only able to get 512k max upload. I have been looking into the AWS-S3 gem, and while it doesn't offer any kind of rate limiting I am aware that you can pass in an IO stream. Given this I am wondering if it might be possible to create a rate-limited stream which overrides the read method, adds in the rate limiting logic (e.g. in its simplest form a call to sleep between reads) and then calls out to the super of the overridden method. Another option I considered is hacking the code for Net::HTTP and putting the rate limiting into the send_request_with_body_stream method, which uses a while loop, but I'm not entirely sure which would be the best option. I have attempted to extend the IO class, however that didn't work at all; simply inheriting from the class with class ThrottledIO < IO didn't do anything. Any suggestions will be greatly appreciated.
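
    A rough sketch of the wrapper idea from the question, wrapping an IO rather than inheriting from it. The names and the set of delegated methods are assumptions; the AWS-S3 gem may also call things like size or rewind on the object, so more delegation may be needed.

        # Caps throughput by sleeping after each read so the average rate
        # stays near bytes_per_sec (default ~20 KB/s).
        class ThrottledIO
          def initialize(io, bytes_per_sec = 20 * 1024)
            @io = io
            @bytes_per_sec = bytes_per_sec
          end

          def read(length = nil, buffer = nil)
            chunk = buffer ? @io.read(length, buffer) : @io.read(length)
            sleep(chunk.size.to_f / @bytes_per_sec) if chunk
            chunk
          end

          def eof?
            @io.eof?
          end
        end

        # throttled = ThrottledIO.new(File.open('video.flv', 'rb'))
        # AWS::S3::S3Object.store('video.flv', throttled, 'my-bucket')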

    Read the article

  • propositional theorems

    - by gcc
    Wang's algorithm to prove theorems in propositional calculus (label, sequent, comment):
    S1: P → Q, Q → R, ¬R ⊢ ¬P -- Initial sequent.
    S2: ¬P ∨ Q, ¬Q ∨ R, ¬R ⊢ ¬P -- Two applications of R5.
    S3: ¬P ∨ Q, ¬Q ∨ R ⊢ ¬P, R -- R1.
    S4: ¬P, ¬Q ∨ R ⊢ ¬P, R -- S4 and S5 are obtained from S3 with R3. Note that S4 is an axiom since ¬P appears on both sides of the sequent arrow at the top level.
    S5: Q, ¬Q ∨ R ⊢ ¬P, R -- The other sequent generated by the application of R3.
    S6: Q, ¬Q ⊢ ¬P, R -- S6 and S7 are obtained from S5 using R3.
    S7: Q, R ⊢ ¬P, R -- This is an axiom.
    S8: Q ⊢ ¬P, R, Q -- Obtained from S6 using R1. S8 is an axiom.
    The original sequent is now proved, since it has successfully been transformed into a set of three axioms with no unproved sequents left over. (R1 through R5 are the transformation rules.) My homework is quite complicated, as you can see, and I must finish it in 4 days. I just want help: can you show me how I should construct the algorithm, and how I should take and store the input?
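
    Since the actual question is how to construct the algorithm and represent the input, here is a hedged sketch in Python. Formulas are nested tuples such as ('imp', 'P', 'Q') or ('not', 'R'), with bare strings as atoms; the comments describe what each branch does and only loosely correspond to the homework's R1 through R5 numbering.

        def prove(left, right):
            """Return True if the sequent  left |- right  is provable."""
            # Axiom: some atom appears on both sides at the top level.
            if set(f for f in left if isinstance(f, str)) & \
               set(f for f in right if isinstance(f, str)):
                return True

            # Decompose the first compound formula on the left.
            for i, f in enumerate(left):
                if isinstance(f, str):
                    continue
                rest = left[:i] + left[i + 1:]
                if f[0] == 'not':    # not X on the left: move X to the right
                    return prove(rest, right + [f[1]])
                if f[0] == 'and':    # X and Y on the left: keep both on the left
                    return prove(rest + [f[1], f[2]], right)
                if f[0] == 'or':     # X or Y on the left: prove both branches
                    return prove(rest + [f[1]], right) and prove(rest + [f[2]], right)
                if f[0] == 'imp':    # rewrite X -> Y as (not X) or Y
                    return prove(rest + [('or', ('not', f[1]), f[2])], right)

            # Decompose the first compound formula on the right.
            for i, f in enumerate(right):
                if isinstance(f, str):
                    continue
                rest = right[:i] + right[i + 1:]
                if f[0] == 'not':    # not X on the right: move X to the left
                    return prove(left + [f[1]], rest)
                if f[0] == 'or':     # X or Y on the right: keep both on the right
                    return prove(left, rest + [f[1], f[2]])
                if f[0] == 'and':    # X and Y on the right: prove both branches
                    return prove(left, rest + [f[1]]) and prove(left, rest + [f[2]])
                if f[0] == 'imp':    # X -> Y on the right: X to the left, Y stays right
                    return prove(left + [f[1]], rest + [f[2]])

            return False  # only atoms remain and none is shared: not provable

        # The homework's example: P -> Q, Q -> R, not R  |-  not P
        print(prove([('imp', 'P', 'Q'), ('imp', 'Q', 'R'), ('not', 'R')],
                    [('not', 'P')]))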

    Read the article

  • RESTful copy/move operations?

    - by ladenedge
    I am trying to design a RESTful filesystem-like service, and copy/move operations are causing me some trouble. First of all, uploading a new file is done using a PUT to the file's ultimate URL: PUT /folders/42/contents/<name> The question is, what if the new file already resides on the system under a different URL? Copy/move Idea 1: PUTs with custom headers. This is similar to S3's copy. A PUT that looks the same as the upload, but with a custom header: PUT /folders/42/contents/<name> X-custom-source: /files/5 This is nice because it's easy to change the file's name at copy/move time. However, S3 doesn't offer a move operation, perhaps because a move using this scheme won't be idempotent. Copy/move Idea 2: POST to parent folder. This is similar to the Google Docs copy. A POST to the destination folder with XML content describing the source file: POST /folders/42/contents ... <source>/files/5</source> <newName>foo</newName> I might be able to POST to the file's new URL to change its name..? Otherwise I'm stuck with specifying a new name in the XML content, which amplifies the RPCness of this idea. It's also not as consistent with the upload operation as idea 1. Ultimately I'm looking for something that's easy to use and understand, so in addition to criticism of the above, new ideas are certainly welcome!

    Read the article

  • Loading an FLV in Facebox with jQuery for IE7 and IE8

    - by Trip
    It goes almost without saying that this works perfectly in Chrome, Firefox, and Safari; IE (any version) is the problem. Objective: I am trying to load JWplayer, which loads an FLV from S3, in a Facebox popup. jQuery(document).ready(function($) { $('a[rel*=facebox]').facebox() }) HTML (haml): %li#videoGirl = link_to 'What is HQchannel?', '#player', :rel => 'facebox' .grid_8.omega.alpha#player{:style => 'display: none;'} :javascript var so = new SWFObject('/flash/playerTrans.swf','mpl','640px','360px','0'); so.addParam('allowscriptaccess','always'); so.addParam('allowfullscreen','true'); so.addParam('wmode','transparent'); so.addVariable('file', 'http://hometownquarterlyvideos.s3.amazonaws.com/whatishqchannel.flv&autostart=true&controlbar=none&repeat=always&image=/flash/video_girl/whatishqchannel.jpg&icons=false&screencolor=none&backcolor=FFFFFF&screenalpha=0&overstretch'); so.addVariable('overstretch', 'true') so.write('player'); Problem: Despite the video being set to display: none;, it begins playing anyway. When clicking on the activation div, IE7 pops up a wrongly sized blank div with a nav (params are set to not show the nav and scrubber), and no buttons on the nav and scrubber work. IE8 shows the right size but the same behavior, with the nav and scrubber not working, and a blank screen. My guess: I'm thinking that the problem is the JavaScript not being called at the right times. It seems it's loading the facebox without the JWplayer; at least I assume so, hence the reason why the nav is there. I'm thinking that it did not read the JavaScript for that.

    Read the article

  • Assign sage variable values into R objects via sagetex and Sweave

    - by sheed03
    I am writing a short Sweave document that outputs into a Beamer presentation, in which I am using the sagetex package to solve an equation for two parameters in the beta binomial distribution, and I need to assign the parameter values into the R session so I can do additional processing on those values. The following code excerpt shows how I am interacting with sage: <<echo=false,results=hide>>= mean.raw <- c(5, 3.5, 2) theta <- 0.5 var.raw <- mean.raw + ((mean.raw^2)/theta) @ \begin{frame}[fragile] \frametitle{Test of Sage 2} \begin{sagesilent} var('a1, b1, a2, b2, a3, b3') eqn1 = [1000*a1/(a1+b1)==\Sexpr{mean.raw[1]}, ((1000*a1*b1)*(1000+a1+b1))/((a1+b1)^2*(a1+b1+1))==\Sexpr{var.raw[1]}] eqn2 = [1000*a2/(a2+b2)==\Sexpr{mean.raw[2]}, ((1000*a2*b2)*(1000+a2+b2))/((a2+b2)^2*(a2+b2+1))==\Sexpr{var.raw[2]}] eqn3 = [1000*a3/(a3+b3)==\Sexpr{mean.raw[3]}, ((1000*a3*b3)*(1000+a3+b3))/((a3+b3)^2*(a3+b3+1))==\Sexpr{var.raw[3]}] s1 = solve(eqn1, a1,b1) s2 = solve(eqn2, a2,b2) s3 = solve(eqn3, a3,b3) \end{sagesilent} Solutions of Beta Binomial Parameters: \begin{itemize} \item $\sage{s1[0]}$ \item $\sage{s2[0]}$ \item $\sage{s3[0]}$ \end{itemize} \end{frame} Everything compiles just fine, and in that slide I am able to see the solutions to the three equations respective parameters in that itemized list (for example the first item in the itemized list from that beamer slide is outputted as [a1=(328/667), b1=(65272/667)] (I am not able to post an image of the beamer slide but I hope you get the idea). I would like to save the parameter values a1,b1,a2,b2,a3,b3 into R objects so that I can use them in simulations. I cannot find any documentation in the sagetex package on how to save output from sage commands into variables for use with other programs (in this case R). Any suggestions on how to get these values into R?

    Read the article
