Search Results

Search found 6393 results on 256 pages for 'frame rate'.


  • Wired connection problem through router

    - by tommypincha
    I'm having trouble to connect my Ubuntu 11.10 to internet through ethernet. I installed a router to get Wi-Fi and now (by a wired connection) I can't have internet with ubuntu (but I can with Windows 7). I see several attempts per minute of the network-manager to get a connection, but after a minute it stops trying. Here are a couple of outputs from key files: cat /etc/network/interfaces auto lo iface lo inet loopback and ifconfig -a eth0 Link encap:Ethernet HWaddr 00:16:76:e4:a6:e8 inet6 addr: fe80::216:76ff:fee4:a6e8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:117 dropped:0 overruns:0 frame:117 TX packets:50 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:12221 (12.2 KB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:582 errors:0 dropped:0 overruns:0 frame:0 TX packets:582 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:46024 (46.0 KB) TX bytes:46024 (46.0 KB) I tried reconnecting the modem and the router and reconnecting the ethernet cable but nothing... I tried other solutions from other posts (this one has a similar issue Wired connection not working) but again nothing. My IP is dynamic. A couple of things I see and did: I see no inet addr, only inet6. I ignored ipv6 from the internet connections, and restarted the network-manager service and nothing. A difference with the post I mentioned is the RX packets with errors I have, is this a clue of the problem? Any help would be appreciated, Thanks!
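    A minimal sketch of one thing worth trying: take NetworkManager out of the loop and let ifupdown request a DHCP lease for the wired card directly. The interface name eth0 comes from the ifconfig output above; everything else is a generic assumption rather than a confirmed fix for this machine.

        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        # let ifupdown manage the wired interface with DHCP
        auto eth0
        iface eth0 inet dhcp

    Running sudo ifdown eth0 followed by sudo ifup eth0 (or rebooting) then shows whether the card can obtain a lease at all; if it can, the problem sits on the NetworkManager side rather than on the link. The RX errors/frame counters in the output above can also be a sign of a cable or duplex problem, so swapping the cable or the router port is a cheap extra test.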

    Read the article

  • How to use mount points in MilkShape models?

    - by vividos
    I have bought the Warriors & Commoners model pack from Frogames and the pack contains (among other formats) two animated models and several non-animated objects (axe, shield, pilosities, etc.) in MilkShape3D format. I looked at the official "MilkShape 3D Viewer v2.0" (msViewer2.zip at http://www.chumba.ch/chumbalum-soft/ms3d/download.html) source code and implemented loading the model, calculating the joint matrices and everything looks fine. In the model there are several joints that are designated as the "mount points" for the static objects like axe and shield. I now want to "put" the axe into the hand of the animated model, and I couldn't quite figure out how. I put the animated vertices in a VBO that gets updated every frame (I know I should do this with a shader, but I didn't have time to do this yet). I put the static vertices in another VBO that I want to keep static and not updated every frame. I now tried to render the animated vertices first, then use the joint matrix for the "mount joint" to calculate the location of the static object. I tried many things, and what about seems to be right is to transpose the joint matrix, then use glMatrixMult() to transform the modelview matrix. For some objects like the axe this is working, but not for others, e.g. the pilosities. Now my question: How is this generally implemented when using bone/joint models, and especially with MilkShape3D models? Am I on the right track?
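    For the attach step itself, a minimal fixed-function sketch of what usually works with the MilkShape viewer data: take the joint's absolute matrix for the current frame, transpose it into column-major order (the viewer stores matrices row-major while OpenGL expects column-major, which is why the transpose seemed right), and multiply it onto the modelview matrix before drawing the static object's VBO. Function and parameter names here are illustrative, and GLEW is assumed only so the buffer calls resolve.

        #include <GL/glew.h>

        // Draw a static, non-animated object at an animated joint ("mount point").
        // jointAbsolute: the joint's absolute matrix for the current frame, row-major.
        void drawMountedObject(const float jointAbsolute[4][4], GLuint staticVbo, int vertexCount)
        {
            float columnMajor[16];
            for (int r = 0; r < 4; ++r)
                for (int c = 0; c < 4; ++c)
                    columnMajor[c * 4 + r] = jointAbsolute[r][c];   // transpose while packing

            glPushMatrix();
            glMultMatrixf(columnMajor);                 // modelview = modelview * joint transform
            glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, 0);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            glDisableClientState(GL_VERTEX_ARRAY);
            glPopMatrix();
        }

    For objects that still end up in the wrong place (the pilosities, for example), it is worth checking how their vertices are authored: if they are stored relative to the model origin rather than relative to the joint, the joint's inverse bind-pose matrix has to be applied first, just as the skinned vertices do.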

    Read the article

  • Internet slow on one router only [the problem only in Ubuntu] [on hold]

    - by mrSuperEvening
    Internet works perfectly on every other router, but browsing sucks at home (slow browsing and slow loading times). I changed DNS servers to 8.8.0.0, still doesn't help. And funnily, download speed is extremely high on this network (meaning torrents for example), but using browsers and loading websites is extremely slow (only on this network). Do I need to change something in router settings or what can I try? By the way, I use wired connection to router. EDIT: There's no problems when using Windows. EDIT: ifconfig: eth0 Link encap:Ethernet HWaddr f2:4d:a0:c0:3f:4c inet addr:192.168.11.8 Bcast:192.168.11.255 Mask:255.255.255.0 inet6 addr: fe80::f24d:a2ff:fec6:3f4c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:206798 errors:0 dropped:0 overruns:0 frame:0 TX packets:219570 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:76680734 (76.6 MB) TX bytes:21738160 (21.7 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:160 errors:0 dropped:0 overruns:0 frame:0 TX packets:160 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:11094 (11.0 KB) TX bytes:11094 (11.0 KB)` ping -c 2 4.2.2.2 PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data. --- 4.2.2.2 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms ping -c 2 google.com PING google.com (213.159.32.147) 56(84) bytes of data. 64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=1 ttl=61 time=0.936 ms 64 bytes from lan-213-159-32-147.kns.skynet.lv (213.159.32.147): icmp_seq=2 ttl=61 time=0.937 ms --- google.com ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.936/0.936/0.937/0.030 ms

    Read the article

  • Estimate angle to launch missile, maths question

    - by Jonathan
    I've been working on this for an hour or two now, and maths really isn't my strong suit, which is definitely not a good thing for a game programmer, but that shouldn't stop me enjoying a hobby, surely? After a few failed attempts I was hoping someone else out there could help, so here's the situation. I'm trying to implement a bit of faked intelligence when the AI fires its missiles at a target in a 2D game world, by predicting the likely position the target will be in, given its current velocity and the time it will take the missile to reach it. I created an image to demonstrate my thinking: http://i.imgur.com/SFmU3.png which also contains the logic I use for accelerating the missile after launch. The ship that fires the missile can fire within a total 40-degree angle, 20 either side of itself, but this could become variable. My current attempt was to break the space between the two lines into segments matching the target's width, then calculate the time it would take the missile to get to each segment. For each iteration we total up the per-frame speeds, which tells us the distance travelled, and that would then just need to be compared to the distance to the segment. The speed at a given frame is startVelocity * ((1 + acceleration)^(currentFrame - 1)). So for example, if we start at a velocity of 1f/frame with an acceleration of 0.1f, the formula at frame 4 would be 1 * (1.1^3) = 1.331. But I quickly realized I was getting lost when trying to put this into practice. Does this seem like a correct starting point or am I going completely the wrong way about it? Any pointers would help me greatly. Maths really isn't my strong suit, so I get easily lost in these matters and don't even really know a good phrase to search for. So I guess in summary my question is more about the correct way to approach this problem; any additional code samples on top of that would be great, but I'm not averse to working out the complete code from helpful pointers.
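    As a sketch of that approach in code (plain C#, with System.Numerics.Vector2 standing in for whatever vector type the engine actually uses; all names are illustrative): count how many frames the missile needs to cover a given distance when its per-frame speed is multiplied by (1 + acceleration) each frame, then aim at where the target will be after that many frames.

        using System.Numerics;

        static class InterceptHelper
        {
            // Frames needed to cover `distance` when speed starts at startVelocity
            // and grows by a factor of (1 + acceleration) every frame
            // (1, 1.1, 1.21, ... for acceleration = 0.1).
            public static int FramesToTravel(float distance, float startVelocity, float acceleration)
            {
                float travelled = 0f, speed = startVelocity;
                int frames = 0;
                while (travelled < distance)
                {
                    travelled += speed;
                    speed *= 1f + acceleration;
                    frames++;
                }
                return frames;
            }

            // Rough intercept point: current position plus per-frame velocity times flight time.
            public static Vector2 PredictTargetPosition(Vector2 targetPos, Vector2 targetVelPerFrame, int frames)
            {
                return targetPos + targetVelPerFrame * frames;
            }
        }

    The usual refinement is to iterate once or twice: predict the position, measure the distance to that predicted point, recompute the frame count, and predict again. That converges quickly, avoids any closed-form algebra, and the 40-degree firing-arc check can still be applied to the final predicted point before launching.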

    Read the article

  • Implementing an FSM with AS 2.0?

    - by Up2u
    I have seen many references about AI and FSMs, like http://www.richardlord.net/blog/fini...n-actionscript and sadly I still can't understand the point of an FSM in AS 2.0. Is it a must to create a class for each state? I have a game project with an AI, and the AI has 3 states: distanceCheck, ChaseTarget, and Hit the target. The game I'm creating is an FPS played with the mouse. What I mean is: I have already created the AI (and it works), but I want to convert it to the FSM approach. I created a function CheckDistanceState(), and in that function I lock the target using an array, sorting it by nearest distance; locking the target triggers the function ChaseState(), and inside ChaseState() I call the Hit() function to destroy the enemy. The 3 functions I created are all called from AI_cursor.onEnterFrame (an FPS game that only has a cursor on the stage). Is there any chance to implement an FSM in my code without creating a class? From what I've read, creating a class means creating external code outside of the frame (I'm used to coding in the frame), and I still don't understand that. Sorry if my explanation isn't clear...
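    It does not have to be a class per state. In frame-script AS2 a state can simply be a function reference that the onEnterFrame handler calls every tick, and a transition is just reassigning that reference; a rough sketch along the lines of the three states described above (all names are illustrative):

        // the current state is just a reference to one of the state functions
        var currentState:Function = checkDistanceState;

        AI_cursor.onEnterFrame = function() {
            currentState(); // run whichever state the AI is in this frame
        };

        function checkDistanceState():Void {
            // sort the targets array by distance and lock the nearest one...
            currentState = chaseState;      // transition
        }

        function chaseState():Void {
            // move toward the locked target; once in range:
            currentState = hitState;
        }

        function hitState():Void {
            // apply the hit / destroy the enemy, then go back to scanning
            currentState = checkDistanceState;
        }

    This keeps everything in the frame script while still giving each state one entry point and explicit transitions, which is the part of the FSM pattern that matters; the class-based versions in the linked article just package the same idea more formally.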

    Read the article

  • Entity System with C++

    - by Dono
    I'm working on a game engine using the Entity System pattern and I have some questions. How I see the Entity System: Components are classes with attributes and getters/setters (Sprite, PhysicBody, SpaceShip, ...). Systems are classes holding a list of components plus the component logic (EntityManager, Renderer, Input, Camera, ...). An Entity is just an empty class with a list of components. What I've done so far: I currently have a program that lets me write this: // Create a new entity. Entity* entity = game.createEntity(); // Add some components. entity->addComponent( new TransformableComponent() ) ->setPosition( 15, 50 ) ->setRotation( 90 ) ->addComponent( new PhysicComponent() ) ->setMass( 70 ) ->addComponent( new SpriteComponent() ) ->setTexture( "name.png" ) ->addToSystem( new RendererSystem() ); My question: should a system store a list of components or a list of entities? If it stores a list of entities, it has to look up each entity's components every frame, and that is probably expensive, isn't it?
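    One common answer, sketched below in the same C++ style: the system stores small records pointing straight at the components it needs, registered once when addToSystem is called, so nothing has to be searched per entity per frame. Type and member names here are illustrative, not taken from the engine above.

        #include <vector>

        struct TransformableComponent { float x = 0, y = 0, rotation = 0; };
        struct SpriteComponent        { /* texture handle, etc. */ };

        // What the renderer actually needs from an entity, resolved once at registration.
        struct RenderItem {
            TransformableComponent* transform;
            SpriteComponent*        sprite;
        };

        class RendererSystem {
        public:
            void registerEntity(TransformableComponent* t, SpriteComponent* s) {
                items.push_back({t, s});
            }
            void update() {
                for (RenderItem& item : items) {
                    // draw item.sprite at item.transform->x / y / rotation
                }
            }
        private:
            std::vector<RenderItem> items;
        };

    Storing entities and calling a getComponent lookup every frame also works and is simpler to start with; the cached-pointer (or cached-index) variant is the usual optimization once profiling shows that the lookups actually matter.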

    Read the article

  • Wired Connection Problem

    - by Dave
    After upgrading to 12.04 my interent connection no longer works. More precisely it is really, really, slow, and occasionally will connect, but do so only for a few moments and then disappear again. I am on a Lenovo Workstation e20. Output of ifconfig: eth0 Link encap:Ethernet HWaddr 70:f3:95:00:64:3e inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::72f3:95ff:fe00:643e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7398 errors:0 dropped:74 overruns:0 frame:0 TX packets:6684 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5407828 (5.4 MB) TX bytes:854343 (854.3 KB) Interrupt:20 Memory:fb120000-fb140000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1587 errors:0 dropped:0 overruns:0 frame:0 TX packets:1587 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:152089 (152.0 KB) TX bytes:152089 (152.0 KB) I am really at a loss for what to do. I am relatively new to Ubuntu, searched the other user questions and couldn't figure this out.

    Read the article

  • Adding delay between damage

    - by iQue
    I have a bunch of enemies chasing my main-character, and if they intersect I want them to damage him and that's all good. The problem is that right now they damage him as long as they stand around him, every frame! and since it gets called every frame my character's HP reaches 0 almost instantly. I've tried adding delay and I've tried a timertask, but can't get it to work. This is the code I use to check for intersection: private void checkCollision(Canvas canvas) { synchronized (getHolder()) { Rect h1 = happy.getBounds(); for (int i = 0; i < enemies.size(); i++) { for (int j = 0; j < bullets.size(); j++) { Rect b1 = bullets.get(j).getBounds(); Rect e1 = enemies.get(i).getBounds(); if (b1.intersect(e1)) { enemies.get(i).damageHP(5); bullets.remove(j); } if(e1.intersect(h1)){ happy.damageHP(5); // this is the statement that needs some sort of delay, I want them to damage him every 2 seconds they intersect him. } if(enemies.get(i).getHP() <= 0){ enemies.get(i).death(canvas, enemies); score.incScore(5); break; } if(happy.getHP() <= 0){ score.incScore(-50); //end-screen } } } } } If anyone knows the logic to do this please do tell.
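    One way to get the "every 2 seconds while touching" behaviour is to remember when contact damage last fired and skip the call until a cooldown has passed; a sketch of a field and helper method to drop into the same class as checkCollision (only happy comes from the code above, the other names are illustrative):

        // Only let contact damage land once every 2 seconds.
        private long lastContactDamage = 0;                        // ms timestamp of the last hit
        private static final long CONTACT_DAMAGE_COOLDOWN = 2000;  // 2 seconds

        private void applyContactDamage() {
            long now = System.currentTimeMillis();
            if (now - lastContactDamage >= CONTACT_DAMAGE_COOLDOWN) {
                happy.damageHP(5);
                lastContactDamage = now;
            }
        }

    Calling applyContactDamage() from the e1.intersect(h1) branch keeps the per-frame collision check exactly as it is, while the timestamp limits how often the damage is actually applied; per-enemy timestamps work the same way if each enemy should run its own 2-second timer.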

    Read the article

  • Realistic Jumping

    - by Seth Taddiken
    I want to make the jumping that my character does more realistic. This is what I've tried so far but it doesn't seem very realistic when the player jumps. I want it to jump up at a certain speed then slow down as it gets to the top then eventually stopping (for about one frame) and then slowly going back down but going faster and faster as it goes back down. I've been trying to make the speed at which the player jumps up slow down by one each frame then become negative and go down faster... but it doesn't work very well public bool isPlayerDown = true; public bool maxJumpLimit = false; public bool gravityReality = false; public bool leftWall = false; public bool rightWall = false; public float x = 76f; public float y = 405f; if (Keyboard.GetState().IsKeyDown(up) && this.isPlayerDown == true && this.y <= 405f) { this.isPlayerDown = false; } if (this.isPlayerDown == false && this.maxJumpLimit == false) { this.y = this.y - 6; } if (this.y <= 200) { this.maxJumpLimit = true; } if (this.isPlayerDown == true) { this.y = 405f; this.isPlayerDown = true; this.maxJumpLimit = false; } if (this.gravityReality == true) { this.y = this.y + 2f; this.gravityReality = false; } if (this.maxJumpLimit == true) { this.y = this.y + 2f; this.gravityReality = true; } if (this.y > 405f) { this.isPlayerDown = true; }
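    The usual way to get that arc is to keep a vertical velocity and add a constant gravity to it every frame, rather than moving y by fixed steps and juggling flags. A sketch (class and constant names are illustrative; y and the 405f ground value mirror the fields above, and y grows downward, so a negative velocity means moving up):

        class PlayerJump
        {
            public float y = 405f;
            public float velocityY = 0f;

            const float Gravity = 0.4f;    // added every frame: slows the rise, then speeds up the fall
            const float JumpSpeed = -12f;  // initial upward speed when jump is pressed
            const float GroundY = 405f;

            public void UpdateJump(bool jumpPressed)
            {
                bool onGround = this.y >= GroundY;
                if (jumpPressed && onGround)
                    this.velocityY = JumpSpeed;   // launch

                this.velocityY += Gravity;        // velocityY passes through 0 right at the top of the arc
                this.y += this.velocityY;

                if (this.y > GroundY)             // landed: snap to the ground and stop falling
                {
                    this.y = GroundY;
                    this.velocityY = 0f;
                }
            }
        }

    With this shape the Keyboard.GetState().IsKeyDown(up) check only has to set the initial velocity, the maxJumpLimit and gravityReality flags are no longer needed, and the feel is tuned entirely with the Gravity and JumpSpeed numbers.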

    Read the article

  • How to make ip Address static (eth0)

    - by Jordan Angelucci
    I'm having a really hard time configuring ubuntu 13.04 to have a static ip address. I have tried multiple solutions but everytime I reboot (can't do the network reset command because ubuntu freezes) I end up with no connection. Here is what I get when i type ifconfig into the terminal: eth0 Link encap:Ethernet HWaddr 10:bf:48:bc:07:cb inet addr:192.168.0.8 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::12bf:48ff:febc:7cb/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1763067 errors:0 dropped:0 overruns:0 frame:0 TX packets:1024326 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2284491220 (2.2 GB) TX bytes:136809317 (136.8 MB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:1840 errors:0 dropped:0 overruns:0 frame:0 TX packets:1840 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:185688 (185.6 KB) TX bytes:185688 (185.6 KB) I have also tried this: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 192.168.0.160 netmask 255.255.255.0 broadcast 192.168.0.255 gateway 192.168.0.1 dns-nameservers 24.222.0.94 dns-nameservers 24.222.0.95 If anyone could help me it would be very much appreciated.
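    For what it's worth, two things commonly break this exact setup: NetworkManager and ifupdown fighting over eth0, and DNS not being picked up (the usual convention is to list both servers on a single dns-nameservers line). A sketch of the stanza plus the apply/test steps, reusing the addresses already chosen above; the rest is an assumption, not a confirmed fix:

        # /etc/network/interfaces
        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.0.160
            netmask 255.255.255.0
            broadcast 192.168.0.255
            gateway 192.168.0.1
            dns-nameservers 24.222.0.94 24.222.0.95

        # apply without the command that freezes on this machine
        sudo ifdown eth0 && sudo ifup eth0

        # if NetworkManager keeps reverting the address, check that managed=false is set
        # under [ifupdown] in /etc/NetworkManager/NetworkManager.conf, then run:
        sudo service network-manager restart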

    Read the article

  • Dealing with numerous, simultaneous sounds in unity

    - by luxchar
    I've written a custom class that creates a fixed number of audio sources. When a new sound is played, it goes through the class, which creates a queue of sounds that will be played during that frame. The sounds that are closer to the camera are given preference. If new sounds arrive in the next frame, I have a complex set of rules that determines how to replace the old ones. Ideally, "big" or "important" sounds should not be replaced by small ones. Sound replacement is necessary since the game can be fast-paced at times, and should try to play new sounds by replacing old ones. Otherwise, there can be "silent" moments when an old sound is about to stop playing and isn't replaced right away by a new sound. The drawback of replacing old sounds right away is that there is a harsh transition from the old sound clip to the new one. But I wonder if I could just remove that management logic altogether, and create audio sources on the fly for new sounds. I could give "important" sounds more priority (closer to 0 in the corresponding property) as opposed to less important ones, and let Unity take care of culling out sound effects that exceed the channel limit. The only drawback is that it requires many heap allocations. I wonder what strategy people use here?
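    For the "create sources on the fly" option at the end, a sketch of the simplest version: one throwaway GameObject per sound, with AudioSource.priority carrying the importance so Unity's own voice limit does the culling. Everything here is illustrative rather than taken from the existing manager class:

        using UnityEngine;

        public static class OneShotAudio
        {
            // priority: 0 = most important, 256 = least important; Unity culls the
            // lowest-priority voices first when the real-voice limit is exceeded.
            public static void Play(AudioClip clip, Vector3 position, int priority, float volume = 1f)
            {
                var go = new GameObject("OneShotAudio");
                go.transform.position = position;

                var source = go.AddComponent<AudioSource>();
                source.clip = clip;
                source.priority = priority;
                source.volume = volume;
                source.Play();

                Object.Destroy(go, clip.length);   // clean up when the clip has finished
            }
        }

    The per-sound GameObject is exactly the heap allocation mentioned above; pooling a fixed set of AudioSources and only resetting clip, position and priority on reuse removes that churn while keeping the same priority-based culling, which tends to be the middle ground between hand-rolled replacement rules and pure fire-and-forget.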

    Read the article

  • How to run around another football player

    - by Lumis
    I have finished a simple 2D one-on-one indoor football Android game. Something that seemed so simple to me, a human being, turned out to be difficult for a computer: how to go around the opponent. At the moment the computer player's logic is: if it bumps into the human player, it steps back a few points on the pixel grid and then tries again to go towards the ball. The problem is that if the human player is in between, the computer player oscillates in one place, which does not look very nice, and the human opponent can use this weakness to control the game. You can see this in the photo – at the moment the computer will go along the red line indefinitely. I tried a few ideas, but it proved difficult because both the human player and the ball are constantly moving, so at each step the computer would change direction and “oscillate” again. Once the computer player reaches the ball it kicks it with a random amount of strength and direction towards the human's goal. The question is how to formulate the logic of going around an ever-moving human opponent, and how to translate that into the coordinate system and frame-by-frame animation… any suggestions welcome.
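    One way to phrase the "go around" logic in code: keep aiming at the ball, but whenever the opponent sits roughly on the line from the AI to the ball, aim at a temporary waypoint beside the opponent instead of stepping back. Recomputing this every frame gives a smooth curve rather than oscillation. Plain float maths so it can drop into any Android game loop; all names are illustrative:

        public class AvoidanceSteering {

            /** Returns {targetX, targetY}: either the ball itself or a waypoint beside the opponent. */
            public static float[] chooseTarget(float aiX, float aiY,
                                               float ballX, float ballY,
                                               float oppX, float oppY,
                                               float clearance) {
                float dx = ballX - aiX, dy = ballY - aiY;
                float len = (float) Math.sqrt(dx * dx + dy * dy);
                if (len < 1e-3f) return new float[] { ballX, ballY };

                // signed "side" of the opponent relative to the AI->ball line,
                // plus his projection along that line
                float side  = (oppX - aiX) * dy - (oppY - aiY) * dx;
                float dist  = Math.abs(side) / len;                        // distance from the line
                float along = ((oppX - aiX) * dx + (oppY - aiY) * dy) / len;

                boolean blocking = dist < clearance && along > 0 && along < len;
                if (!blocking) return new float[] { ballX, ballY };        // path is clear

                // waypoint: beside the opponent, on the side of the line he is NOT leaning toward
                float nx = -dy / len, ny = dx / len;                       // unit normal to the line
                float s  = side > 0 ? 1f : -1f;
                return new float[] { oppX + s * 2f * clearance * nx,
                                     oppY + s * 2f * clearance * ny };
            }
        }

    Because both the ball and the opponent move, the waypoint moves smoothly with them, so the computer player curves around instead of flip-flopping; as soon as the opponent stops being between the AI and the ball, the target snaps back to the ball itself.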

    Read the article

  • Google Music Player doesn't work

    - by EricoPF
    I'm trying to log in on Google Music Player application but doesn't work. I get the message below: Login Failed Could not identify your computer. Learn More On Google Help it says that it doesn't run on virtual machines, which is not my case, but I have a virtualbox installed though, and it says some people get to work if they disable their bridge network. The thing is I don't have any bridge interface, even if I remove all virtualbox modules I still get this message. This is my ifconfig output: lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:15374 errors:0 dropped:0 overruns:0 frame:0 TX packets:15374 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1455889 (1.4 MB) TX bytes:1455889 (1.4 MB) wlan0 Link encap:Ethernet HWaddr 94:db:c9:b2:1b:d7 inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::96db:c9ff:feb2:1bd7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:828467 errors:0 dropped:0 overruns:0 frame:0 TX packets:568040 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1086663025 (1.0 GB) TX bytes:72984931 (72.9 MB) Any ideias ? Thanks guys! Cheers.

    Read the article

  • What's the best way to handle numerous recurring log entries in game loop?

    - by Kaa
    I have a custom logging system whose use is scattered all over the engine and game. The system is linked to a "LogStore" that has an std::vector<string> logs[NUM_LOG_TYPES] - each vector corresponds to its log type (info, error, debug, etc.). There's one extra std::vector that holds "coordinates" to all log entries in the order they were received. Now, all the logging output is also displayed inside my development console in the game. The game console is handled by an HTML-type GUI and therefore requires a new <p> element to be added for each log output. My problem is that log entries generated in the main loop each frame freeze the engine, because they keep adding elements to the in-game console, and if the console or GUI itself generates a warning, that creates an infinite logging loop. I want to solve this by handling the recurring log entries in an elegant way that lets you know something is critically wrong but won't freeze the engine - like displaying the count of errors in the last 60 frames instead of displaying the errors themselves. But how do you handle this? Does anyone know any nifty tricks for it? I understand the question may sound vague, but anyone who has come across this type of issue will know exactly what's happening. Example problematic log entries: OpenGL warnings (I actually do check for errors every frame in many places), and really any prints anywhere in the main loop (may be debugging, may be warnings).
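    One pattern that matches the "count of errors in the last 60 frames" idea: the logger only counts repeats during the frame, and a periodic flush emits a single summary line per distinct message, so the HTML console gets a bounded number of new <p> elements per interval no matter how noisy the main loop is. A rough C++ sketch (names are illustrative, and the std::cout call stands in for whatever feeds the GUI console):

        #include <iostream>
        #include <string>
        #include <unordered_map>

        class RepeatSuppressingLog {
        public:
            void log(const std::string& message) {
                ++pending[message];                 // just count it; nothing reaches the console yet
            }

            // Call once per frame; every `interval` frames, emit one summary line per message.
            void flush(long frame, int interval = 60) {
                if (frame % interval != 0 || pending.empty()) return;
                for (const auto& entry : pending)
                    std::cout << entry.first << "  (x" << entry.second
                              << " in the last " << interval << " frames)\n";
                pending.clear();
            }

        private:
            std::unordered_map<std::string, int> pending;
        };

    Routing only the per-frame log types (OpenGL warnings, main-loop debug prints) through this, and letting genuine one-off messages go straight to the console, keeps the useful detail while breaking the console-warns-about-the-console feedback loop.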

    Read the article

  • MWE function error message saying "use of undeclared identifier" [migrated]

    - by nimbus
    I'm unsure how to make a MWE with Objective-C, so if you need anything else let me know. I am trying to run through a tutorial on building an iPhone app and have gotten stuck defining a function. I keep getting an error message saying "use of undeclared identifier", even though I believe I have declared the function. In the view controller I have: if (scrollAmount > 0) { moveViewUp = YES; [scrollTheView:YES]; } else{ moveViewUp = NO; } with the function under it - (void)scrollTheView:(BOOL)movedUp { [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:0.3]; CGRect rect = self.view.frame; if (movedUp){ rect.origin.y -= scrollAmount; } else { rect.origin.y += scrollAmount; } self.view.frame = rect; [UIView commitAnimations]; } I have also declared the function in the header file (which I have imported): - (void)scrollTheView:(BOOL)movedUp;
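    For what it's worth, the error is most likely about the call rather than the declaration: inside an instance method a message still needs a receiver, so the bare [scrollTheView:YES] is read as an unknown identifier. A minimal sketch of the corrected call, assuming the snippet lives in the same view controller that declares scrollTheView: in its header:

        if (scrollAmount > 0) {
            moveViewUp = YES;
            [self scrollTheView:YES];   // send the message to self; the bare call has no receiver
        } else {
            moveViewUp = NO;
        }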

    Read the article

  • Unable to get wireless card working properly in DELL D620 laptop - Zorin OS 6.0 (3.2.0)

    - by ElliTech
    I have a DELL D620 laptop running Zorin OS 6.0 (3.2.0) The wireless was working prior to running Update Manager. Here is the output from some commands: dell@dell-lt:/$ rfkill list all 0: dell-wifi: Wireless LAN Soft blocked: no Hard blocked: no dell@dell-lt:/$ iwconfig lo no wireless extensions. eth0 no wireless extensions. dell@dell-lt:/$ ifconfig -a eth0 Link encap:Ethernet HWaddr 00:18:8b:d4:d8:2f inet addr:192.168.1.74 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::218:8bff:fed4:d82f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:22227 errors:0 dropped:0 overruns:0 frame:0 TX packets:15027 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:26452734 (26.4 MB) TX bytes:1708955 (1.7 MB) Interrupt:18 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) I am at a loss as to how to get the wireless working!!! ElliTech
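    Since iwconfig shows no wireless interface at all, the next things worth checking are whether the card is still detected and whether a driver and its firmware claimed it after the update; a few generic diagnostic commands (nothing here is specific to the D620 or to Zorin):

        # is the wireless card still visible on the PCI bus?
        lspci -nn | grep -i -E 'wireless|network'

        # did a driver bind to it? an "UNCLAIMED" network entry means no driver did
        sudo lshw -C network

        # firmware or driver errors from the wireless module, if any
        dmesg | grep -i -E 'firmware|wlan|wifi'

    If the device shows up as UNCLAIMED, reinstalling the restricted driver through the Additional Drivers tool (or the firmware package matching the chipset lspci reports) is usually the next step; if the device has disappeared from lspci entirely, a hardware switch or BIOS setting is the more likely culprit than the update itself.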

    Read the article

  • ControlTemplate Exception: "XamlParseException: Cannot find a Resource with the Name/Key"

    - by akaphenom
    If I move the application resources to a UserControl resources everythgin runs groovy. I still don't understand why. I noticed that my application object MyApp did more than inherit from Application it loaded an XAML for the main template and connected all the plumbing. So I decided to create a user control to remove the template from the applciation (thinking there may be an even order issue not allowing my resource to be found). namespace Module1 type Template() as this = //inherit UriUserControl("/FSSilverlightApp;component/template.xaml", "") inherit UriUserControl("/FSSilverlightApp;component/templateSimple.xaml", "") do Application.LoadComponent(this, base.uri) let siteTemplate : Grid = (this.Content :?> FrameworkElement) ? siteTemplate let nav : Frame = siteTemplate ? contentFrame let pages : UriUserControl array = [| new Module1.Page1() :> UriUserControl ; new Module1.Page2() :> UriUserControl ; new Module1.Page3() :> UriUserControl ; new Module1.Page4() :> UriUserControl ; new Module1.Page5() :> UriUserControl ; |] do nav.Navigate((pages.[0] :> INamedUriProvider).Uri) |> ignore type MyApp() as this = inherit Application() do Application.LoadComponent(this, new System.Uri("/FSSilverlightApp;component/App.xaml", System.UriKind.Relative)) do System.Windows.Browser.HtmlPage.Plugin.Focus() this.RootVisual <- new Template() ; // test code to check for the existance of the ControlTemplate - it exists let a = Application.Current.Resources.MergedDictionaries let b = a.[0] let c = b.Count let d : ControlTemplate = downcast c.["TransitioningFrame"] () "/FSSilverlightApp;component/templateSimple.xaml" <UserControl x:Class="Module1.Template" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" > <Grid HorizontalAlignment="Center" Background="White" Name="siteTemplate"> <StackPanel Grid.Row="3" Grid.Column="2" Name="mainPanel"> <!--Template="{StaticResource TransitioningFrame}"--> <navigation:Frame Name="contentFrame" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Template="{StaticResource TransitioningFrame}"/> </StackPanel> </Grid> </UserControl> "/FSSilverlightApp;component/App.xaml" <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="Module1.MyApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml" /> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> "/FSSilverlightApp;component/TransitioningFrame.xaml" <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame"> <Border Background="Olive" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="5" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"> <ContentPresenter Cursor="{TemplateBinding Cursor}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" 
Content="{TemplateBinding Content}"/> </Border> </ControlTemplate> </ResourceDictionary> Unfortunately that did not work. If I remove the template attribute from the navigationFrame element, the app loads and directs the content area to the first page in the pages array. Referencing that resource continues to throw a resource no found error. Original Post I have the following app.xaml (using Silverlight 3) <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" x:Class="Module1.MyApp"> <Application.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml" /> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Application.Resources> </Application> and content template: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame"> <Border Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"> <ContentPresenter Cursor="{TemplateBinding Cursor}" HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" Margin="{TemplateBinding Padding}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" Content="{TemplateBinding Content}"/> </Border> </ControlTemplate> </ResourceDictionary> The debugger says it the contentTemplate is loaded correctly by adding some minimal code: type MyApp() as this = inherit Application() do Application.LoadComponent(this, new System.Uri("/FSSilverlightApp;component/App.xaml", System.UriKind.Relative)) let cc = new ContentControl() let mainGrid : Grid = loadXaml("MainWindow.xaml") do this.Startup.Add(this.startup) let t = Application.Current.Resources.MergedDictionaries let t1 = t.[0] let t2 = t1.Count let t3: ControlTemplate = t1.["TransitioningFrame"] With this line in my main.xaml <navigation:Frame Name="contentFrame" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Template="{StaticResource TransitioningFrame}"/> Yields this exception Webpage error details User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E) Timestamp: Mon, 24 May 2010 23:10:15 UTC Message: Unhandled Error in Silverlight Application Code: 4004 Category: ManagedRuntimeError Message: System.Windows.Markup.XamlParseException: Cannot find a Resource with the Name/Key TransitioningFrame [Line: 86 Position: 115] at MS.Internal.XcpImports.CreateFromXaml(String xamlString, Boolean createNamescope, Boolean requireDefaultNamespace, Boolean allowEventHandlers, Boolean expandTemplatesDuringParse) at MS.Internal.XcpImports.CreateFromXaml(String xamlString, Boolean createNamescope, Boolean requireDefaultNamespace, Boolean allowEventHandlers) at System.Windows.Markup.XamlReader.Load(String xaml) at Globals.loadXaml[T](String xamlPath) at Module1.MyApp..ctor() Line: 54 Char: 13 Code: 0 URI: file:///C:/fsharp/FSSilverlightDemo/FSSilverlightApp/bin/Debug/SilverlightApplication2TestPage.html

    Read the article

  • Kernel panic when bringing up DRBD resource

    - by sc.
    I'm trying to set up two machines synchonizing with DRBD. The storage is setup as follows: PV - LVM - DRBD - CLVM - GFS2. DRBD is set up in dual primary mode. The first server is set up and running fine in primary mode. The drives on the first server have data on them. I've set up the second server and I'm trying to bring up the DRBD resources. I created all the base LVM's to match the first server. After initializing the resources with `` drbdadm create-md storage I'm bringing up the resources by issuing drbdadm up storage After issuing that command, I get a kernel panic and the server reboots in 30 seconds. Here's a screen capture. My configuration is as follows: OS: CentOS 6 uname -a Linux host.structuralcomponents.net 2.6.32-279.5.2.el6.x86_64 #1 SMP Fri Aug 24 01:07:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux rpm -qa | grep drbd kmod-drbd84-8.4.1-2.el6.elrepo.x86_64 drbd84-utils-8.4.1-2.el6.elrepo.x86_64 cat /etc/drbd.d/global_common.conf global { usage-count yes; # minor-count dialog-refresh disable-ip-verification } common { handlers { pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b /proc/sysrq-trigger ; reboot -f"; pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b /proc/sysrq-trigger ; reboot -f"; local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o /proc/sysrq-trigger ; halt -f"; # fence-peer "/usr/lib/drbd/crm-fence-peer.sh"; # split-brain "/usr/lib/drbd/notify-split-brain.sh root"; # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root"; # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k"; # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh; } startup { # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb become-primary-on both; wfc-timeout 30; degr-wfc-timeout 10; outdated-wfc-timeout 10; } options { # cpu-mask on-no-data-accessible } disk { # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes # disk-drain md-flushes resync-rate resync-after al-extents # c-plan-ahead c-delay-target c-fill-target c-max-rate # c-min-rate disk-timeout } net { # protocol timeout max-epoch-size max-buffers unplug-watermark # connect-int ping-int sndbuf-size rcvbuf-size ko-count # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri # after-sb-1pri after-sb-2pri always-asbp rr-conflict # ping-timeout data-integrity-alg tcp-cork on-congestion # congestion-fill congestion-extents csums-alg verify-alg # use-rle protocol C; allow-two-primaries yes; after-sb-0pri discard-zero-changes; after-sb-1pri discard-secondary; after-sb-2pri disconnect; } } cat /etc/drbd.d/storage.res resource storage { device /dev/drbd0; meta-disk internal; on host.structuralcomponents.net { address 10.10.1.120:7788; disk /dev/vg_storage/lv_storage; } on host2.structuralcomponents.net { address 10.10.1.121:7788; disk /dev/vg_storage/lv_storage; } /var/log/messages is not logging anything about the crash. I've been trying to find a cause of this but I've come up with nothing. Can anyone help me out? Thanks.

    Read the article

  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP Flood which is more of application dependent. I run a Game-Server and an attacker is flooding me with "getstatus" query which makes the GameServer respond by making the replies to the query which cause output to the attacker's IP as high as 30mb/s and server lag. Here are the packet details, Packet starts with 4 bytes 0xff and then getstatus. Theoretically, the packet is like "\xff\xff\xff\xffgetstatus " Now that I've tried a lot of iptables variations like state and rate-limiting along side but those didn't work. Rate Limit works good but only when the Server is not started. As soon as the server starts, no iptables rule seems to block it. Anyone else got more solutions? someone asked me to contact the provider and get it done at the Network/Router but that looks very odd and I believe they might not do it since that would also affect other clients. Responding to all those answers, I'd say: Firstly, its a VPS so they can't do it for me. Secondly, I don't care if something is coming in but since its application generated so there has to be a OS level solution to block the outgoing packets. At least the outgoing ones must be stopped. Secondly, its not Ddos since just 400kb/s input generates 30mb/s output from my GameServer. That never happens in a D-dos. Asking the provider/hardware level solution should be used in that case but this one is different. And Yes, Banning his IP stops the flood of outgoing packets but he has many more IP-Addresses as he spoofs his original so I just need something to block him automatically. Even tried a lot of Firewalls but as you know they are just front-ends to iptables so if something doesn't work on iptables, what would the firewalls do? These were the rules I tried, iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP It works for the attacks on un-used ports but when the server is listening and responding to the incoming queries by the attacker, it never works. Okay Tom.H, your rules were working when I modified them somehow like this: iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP They worked for about 3 days very good where the string "xxxxxxxxx" would be rate-limited, blocked if someone flooded and also didn't affect the clients. But just today, I tried updating the chain to try to remove a previously blocked IP so for that I had to flush the chain and restore this rule ( iptables -X and iptables -F ), some clients were already connected to servers including me. So restoring the rules now would also block some of the clients string completely while some are not affected. So does this mean I need to restart the server or why else would this happen because the last time the rules were working, there was no one connected?
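    Since the stated goal is to stop the replies the game server itself generates, one commonly suggested shape is to rate-limit the outgoing responses per destination with the hashlimit match, instead of (or on top of) limiting the incoming queries. A sketch; the port, the matched string and the limits are assumptions to adapt, not values taken from the question:

        # Cap how many status replies any single destination IP can be sent per second.
        # 27960/udp and "statusResponse" are placeholders for the real query port and
        # the real reply prefix of this particular game server.
        iptables -A OUTPUT -p udp --sport 27960 \
                 -m string --algo bm --string "statusResponse" \
                 -m hashlimit --hashlimit-above 10/second --hashlimit-burst 20 \
                 --hashlimit-mode dstip --hashlimit-name status-replies \
                 -j DROP

    Because the bucket is keyed on the destination address, legitimate clients (who only ask for status occasionally) stay untouched, while any spoofed source can only make the server send back a capped amount of traffic; the INPUT-side recent/string rules already shown can stay in place to keep the queries themselves down.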

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard debian package with sudo apt-get install motion Assume my DNS IP cam is webcam, user : admin, password : password /etc/motion/motion.conf Bellow are my configuration file settings : netcam_url http://webcam/videostream.cgi netcam_userpass admin:password target_dir /media/videos/log/motion # The mini-http server listens to this port for requests (default: 0 = disabled) webcam_port 1234 ffmpeg_cap_new on ffmpeg_video_codec mpeg4 output_motion off snapshot_interval 0 # Quality of the jpeg (in percent) images produced (default: 50) webcam_quality 50 # Output frames at 1 fps when no motion is detected and increase to the # rate given by webcam_maxrate when motion is detected (default: off) webcam_motion on # Maximum framerate for webcam streams (default: 1) webcam_maxrate 15 # Restrict webcam connections to localhost only (default: on) webcam_localhost on # Limits the number of images per connection (default: 0 = unlimited) # Number can be defined by multiplying actual webcam rate by desired number of seconds # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate webcam_limit 0 control_port 8080 control_authentication admin:password Issue #1 when I try display http:/localhost:1234 the browser looks frozen, no HTTP 404 received but it stills waiting for data it seems .. Issue #2 in the output directory motion writes a lot of jpeg snapshots althought I just want to have several video sequenced files. Log I run motion in interactive mode in a terminal, here is the ouput root@mercure:/etc/motion# motion -c motion-1.0.conf [0] Processing thread 0 - config file motion-1.0.conf [0] Motion 3.2.12 Started [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785 [0] Thread 1 is from motion-1.0.conf [0] motion-httpd/3.2.12 running, accepting connections [0] motion-httpd: waiting for data on port TCP 8080 [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Operation now in progress [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Resource temporarily unavailable [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Connection reset by peer [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [0] httpd - Finishing [0] httpd Closing [0] httpd thread exit [1] 
netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 It doesn't find the codec ... avcodec_open - could not open codec: Operation now in progress I've changed fmpeg_video_codec from mpeg4 to swf the result is the same. When using flv format motion writes a lot of .jpg image but I can't see anything at http://localhost:1234 [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg Any idea just to get the video stream recoded on my local disk ?
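    Two motion.conf details may be worth checking here. The per-frame jpegs most likely come from output_normal, which is still at its default of on (output_motion only controls the motion-pixel frames), and the avi failure is the codec that this particular ffmpeg build cannot open, so trying another of motion's supported codecs is a cheap test. A hedged sketch of the relevant lines, values to adjust rather than confirmed fixes:

        # stop the per-frame jpeg snapshots; only the movie files remain
        output_normal off

        # if mpeg4 keeps failing with "could not open codec", try another codec that
        # motion supports and the local ffmpeg build actually provides
        ffmpeg_video_codec msmpeg4

    For the frozen http://localhost:1234 page, motion serves an MJPEG stream; some browsers only render it inside an img element or a separate player, so pointing VLC (or a test page with <img src="http://localhost:1234">) at the URL is a quick way to confirm the stream itself works before blaming the config.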

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an ec2 m2.4xlarge running Magento (centos 5.6, httpd 2.2, php 5.2.17 with eaccelerator 0.9.5.3, mysql 5.1.52). Right now we're getting a large traffic spike, and our top looks like this: top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23 Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2433 mysql 15 0 23.6g 4.5g 7112 S 564.7 6.6 33607:34 mysqld 24046 apache 16 0 411m 65m 28m S 26.4 0.1 0:09.05 httpd 24360 apache 15 0 410m 60m 25m S 26.4 0.1 0:03.65 httpd 24993 apache 16 0 410m 57m 21m S 26.1 0.1 0:01.41 httpd 24838 apache 16 0 428m 74m 20m S 24.8 0.1 0:02.37 httpd 24359 apache 16 0 411m 62m 26m R 22.3 0.1 0:08.12 httpd 23850 apache 15 0 411m 64m 27m S 16.8 0.1 0:14.54 httpd 25229 apache 16 0 404m 46m 17m R 10.2 0.1 0:00.71 httpd 14594 apache 15 0 404m 63m 34m S 8.4 0.1 1:10.26 httpd 24955 apache 16 0 404m 50m 21m R 8.4 0.1 0:01.66 httpd 24313 apache 16 0 399m 46m 22m R 8.1 0.1 0:02.30 httpd 25119 apache 16 0 411m 59m 23m S 6.8 0.1 0:01.45 httpd Questions: Would giving msyqld more memory help it cache queries and react faster? If so, how? Other than splitting mysql and php to separate servers (which we're about to do) is there anything else we could/should be doing? Thanks! UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again! # cat /etc/my.cnf [client] port = **** socket = /var/lib/mysql/mysql.sock [mysqld] datadir=/mnt/persistent/mysql port=**** socket=/var/lib/mysql/mysql.sock key_buffer = 512M max_allowed_packet = 64M table_cache = 1024 sort_buffer_size = 8M read_buffer_size = 4M read_rnd_buffer_size = 2M myisam_sort_buffer_size = 64M thread_cache_size = 128M tmp_table_size = 128M join_buffer_size = 1M query_cache_limit = 2M query_cache_size= 64M query_cache_type = 1 max_connections = 1000 thread_stack = 128K thread_concurrency = 48 log-bin=mysql-bin server-id = 1 wait_timeout = 300 innodb_data_home_dir = /mnt/persistent/mysql/ innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size = 20G innodb_additional_mem_pool_size = 20M innodb_log_file_size = 64M innodb_log_buffer_size = 8M innodb_flush_log_at_trx_commit = 1 innodb_lock_wait_timeout = 50 innodb_thread_concurrency = 48 ft_min_word_len=3 [myisamchk] ft_min_word_len=3 key_buffer = 128M sort_buffer_size = 128M read_buffer = 2M write_buffer = 2M # ./mysqltuner.pl >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.52-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 2G (Tables: 26) [--] Data in InnoDB tables: 749M (Tables: 250) [!!] 
Total fragmented tables: 262 -------- Security Recommendations ------------------------------------------- -------- Performance Metrics ------------------------------------------------- [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B) [--] Reads / Writes: 89% / 11% [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads) [OK] Maximum possible memory usage: 35.4G (51% of installed RAM) [OK] Slow queries: 0% (35K/680M) [OK] Highest usage of available connections: 53% (537/1000) [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads) [OK] Query cache efficiency: 42.3% (260M cached / 615M selects) [!!] Query cache prunes per day: 4384652 [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts) [!!] Joins performed without indexes: 100404 [OK] Temporary tables created on disk: 17% (7M on disk / 45M total) [OK] Thread cache hit rate: 99% (537 created / 2M connections) [!!] Table cache hit rate: 0% (1K open / 946K opened) [OK] Open file limit used: 9% (453/5K) [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks) [OK] InnoDB data size / buffer pool: 749.3M/20.0G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 64M) join_buffer_size (> 1.0M, or always use indexes with joins) table_cache (> 1024)
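    Before adding memory, it may help to see what those 500%+ mysqld threads are actually doing; the slow query log isn't enabled in the my.cnf above, but on MySQL 5.1 it can be switched on at runtime and summarized afterwards. A sketch (thresholds and the log path are assumptions; by default the file lands in the datadir as hostname-slow.log):

        -- what is running right now, during a spike
        SHOW FULL PROCESSLIST;

        -- enable slow query logging without a restart
        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 2;

        -- later, from the shell, summarize the worst offenders by total time:
        --   mysqldumpslow -s t /var/lib/mysql/hostname-slow.log | head -20

    The mysqltuner output above already hints where the time is likely going: several million query-cache prunes per day and roughly 100k joins without indexes, both of which show up clearly in the slow log once it is on, and both of which respond better to indexing and smaller result sets than to a larger buffer pool.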

    Read the article

  • Email forwarding from my domain to gmail - FAIL

    - by pitosalas
    [There are numerous similar questions on ServerFault but I couldn't find one that was exactly on point] Background: I use Gmail for my email client. My email is [email protected]. However the email that people communicate to me with is [email protected]. I run the server that hosts www.example.com and other domains, at ServerBeach. Up to yesterday, I had SENDMAIL painlessly just forward emails to [email protected] to [email protected] and everything was fine, for several years in fact. Suddenly my email stopped working - that is, my gmail account stopped receiving emails via the forward from my server. Looking into it I found a bunch of emails sitting on my server with content like this: ... while talking to gmail-smtp-in.l.google.com.: RCPT To: <<< 450-4.2.1 The user you are trying to contact is receiving mail at a rate that <<< 450-4.2.1 prevents additional messages from being delivered. Please resend your <<< 450-4.2.1 message at a later time. If the user is able to receive mail at that <<< 450-4.2.1 time, your message will be delivered. For more information, please <<< 450 4.2.1 visit xxxxxx://mail.google.com/support/bin/answer.py?answer=6592 u15si37138086qco.76 [email protected]... Deferred: 450-4.2.1 The user you are trying to contact is receiving mail at a rate that DATA <<< 550-5.7.1 [64.34.168.137 1] Our system has detected an unusual rate of <<< 550-5.7.1 unsolicited mail originating from your IP address. To protect our <<< 550-5.7.1 users from spam, mail sent from your IP address has been blocked. <<< 550-5.7.1 Please visit xxxxx://www.google.com/mail/help/bulk_mail.html to review <<< 550 5.7.1 our Bulk Email Senders Guidelines. u15si37138086qco.76 554 5.0.0 Service unavailable ... while talking to alt1.gmail-smtp-in.l.google.com.: From what I've been researching, I think somehow someone has/is hijacking my domain name or something and this somehow has caused gmail's servers to notice and cut me off. But I don't know really what's going on nor do I see whatever emails might be involved. I've read stuff on zoneedit.com that sounds like they might have a solution in their service for what I am trying to do. I also read a lot about admining DNS and SENDMAIL and tried various things, but nothing works. Can you tell from my description what is going on that caused GMail's server to stop accepting email from my server and is there a way to stop it? What is the 'correct' way to configure things so that emails to [email protected] behave as if they were sent to [email protected]? Thanks so much!

    Read the article

  • mySQL Optimization Suggestions

    - by Brian Schroeter
    I'm trying to optimize our mySQL configuration for our large Magento website. The reason I believe that mySQL needs to be configured further is because New Relic has shown that our SELECT queries are taking a long time (20,000+ ms) in some categories. I ran MySQLTuner 1.3.0 and got the following results... (Disclaimer: I restarted mySQL earlier after tweaking some settings, and so the results here may not be 100% accurate): >> MySQLTuner 1.3.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering [OK] Currently running supported MySQL version 5.5.37-35.0 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM [--] Data in MyISAM tables: 7G (Tables: 332) [--] Data in InnoDB tables: 213G (Tables: 8714) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [--] Data in MEMORY tables: 0B (Tables: 353) [!!] Total fragmented tables: 5492 -------- Security Recommendations ------------------------------------------- [!!] User '@host5.server1.autopartsnetwork.com' has no password set. [!!] User '@localhost' has no password set. [!!] User 'root@%' has no password set. -------- Performance Metrics ------------------------------------------------- [--] Up for: 5h 3m 4s (5M q [317.443 qps], 42K conn, TX: 18B, RX: 2B) [--] Reads / Writes: 95% / 5% [--] Total buffers: 35.5G global + 184.5M per thread (1024 max threads) [!!] Maximum possible memory usage: 220.0G (174% of installed RAM) [OK] Slow queries: 0% (6K/5M) [OK] Highest usage of available connections: 5% (61/1024) [OK] Key buffer size / total MyISAM indexes: 512.0M/3.1G [OK] Key buffer hit rate: 100.0% (102M cached / 45K reads) [OK] Query cache efficiency: 66.9% (3M cached / 5M selects) [!!] Query cache prunes per day: 3486361 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 812K sorts) [!!] Joins performed without indexes: 1328 [OK] Temporary tables created on disk: 11% (126K on disk / 1M total) [OK] Thread cache hit rate: 99% (61 created / 42K connections) [!!] Table cache hit rate: 19% (9K open / 49K opened) [OK] Open file limit used: 2% (712/25K) [OK] Table locks acquired immediately: 100% (5M immediate / 5M locks) [!!] InnoDB buffer pool / data size: 32.0G/213.4G [OK] InnoDB log waits: 0 -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increasing the query_cache size over 128M may reduce performance Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Read this before increasing table_cache over 64: http://bit.ly/1mi7c4C Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 512M) [see warning above] join_buffer_size (> 128.0M, or always use indexes with joins) table_cache (> 12288) innodb_buffer_pool_size (>= 213G) My my.cnf configuration is as follows... 
[client] port = 3306 [mysqld_safe] nice = 0 [mysqld] tmpdir = /var/lib/mysql/tmp user = mysql port = 3306 skip-external-locking character-set-server = utf8 collation-server = utf8_general_ci event_scheduler = 0 key_buffer = 512M max_allowed_packet = 64M thread_stack = 512K thread_cache_size = 512 sort_buffer_size = 24M read_buffer_size = 8M read_rnd_buffer_size = 24M join_buffer_size = 128M # for some nightly processes client sessions set the join buffer to 8 GB auto-increment-increment = 1 auto-increment-offset = 1 myisam-recover = BACKUP max_connections = 1024 # max connect errors artificially high to support behaviors of NetScaler monitors max_connect_errors = 999999 concurrent_insert = 2 connect_timeout = 5 wait_timeout = 180 net_read_timeout = 120 net_write_timeout = 120 back_log = 128 # this table_open_cache might be too low because of MySQL bugs #16244691 and #65384) table_open_cache = 12288 tmp_table_size = 512M max_heap_table_size = 512M bulk_insert_buffer_size = 512M open-files-limit = 8192 open-files = 1024 query_cache_type = 1 # large query limit supports SOAP and REST API integrations query_cache_limit = 4M # larger than 512 MB query cache size is problematic; this is typically ~60% full query_cache_size = 512M # set to true on read slaves read_only = false slow_query_log_file = /var/log/mysql/slow.log slow_query_log = 0 long_query_time = 0.2 expire_logs_days = 10 max_binlog_size = 1024M binlog_cache_size = 32K sync_binlog = 0 # SSD RAID10 technically has a write capacity of 10000 IOPS innodb_io_capacity = 400 innodb_file_per_table innodb_table_locks = true innodb_lock_wait_timeout = 30 # These servers have 80 CPU threads; match 1:1 innodb_thread_concurrency = 48 innodb_commit_concurrency = 2 innodb_support_xa = true innodb_buffer_pool_size = 32G innodb_file_per_table innodb_flush_log_at_trx_commit = 1 innodb_log_buffer_size = 2G skip-federated [mysqldump] quick quote-names single-transaction max_allowed_packet = 64M I have a monster of a server here to power our site because our catalog is very large (300,000 simple SKUs), and I'm just wondering if I'm missing anything that I can configure further. :-) Thanks!

    Read the article

  • need assistance with my.cnf - 1500% CPU usage

    - by Alan Long
I'm running into a few issues with our new database server. It is an HP G8 with two Intel Xeon E5-2650 processors and 32 GB of RAM. This server is dedicated as a MySQL server (5.1.69) for our intranet portal. I have been having issues with this server staying alive - I notice high CPU usage during certain times of day (8% ~ 1500%+) and very low memory usage (7 ~ 15%) based on the 'top' command. When the CPU usage passes 1000%, that is usually when the app dies. I'm trying to see what I'm doing wrong with the config file; hopefully one of the experts can chime in and let me know what they think. See below for the my.cnf file:

[mysqld]
default-storage-engine=InnoDB
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#user=mysql
large-pages
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
max_connections=275
tmp_table_size=1G
key_buffer_size=384M
key_buffer=384M
thread_cache_size=1024
long_query_time=5
low_priority_updates=1
max_heap_table_size=1G
myisam_sort_buffer_size=8M
concurrent_insert=2
table_cache=1024
sort_buffer_size=8M
read_buffer_size=5M
read_rnd_buffer_size=6M
join_buffer_size=16M
table_definition_cache=6k
open_files_limit=8k
slow_query_log
#skip-name-resolve

# InnoDB settings
innodb_buffer_pool_size=18G
innodb_thread_concurrency=0
innodb_log_file_size=1G
innodb_log_buffer_size=16M
innodb_flush_log_at_trx_commit=2
innodb_lock_wait_timeout=50
innodb_file_per_table
#innodb_buffer_pool_instances=4
# eliminating double buffering
innodb_flush_method = O_DIRECT
flush_time=86400
innodb_additional_mem_pool_size=40M
#innodb_io_capacity = 5000
#innodb_read_io_threads = 64
#innodb_write_io_threads = 64

# increase until threads_created doesn't grow anymore
thread_cache=1024
query_cache_type=1
query_cache_limit=4M
query_cache_size=256M
# Try number of CPUs*2 for thread_concurrency
thread_concurrency = 0
wait_timeout = 1800
connect_timeout = 10
interactive_timeout = 60

[mysqldump]
max_allowed_packet=32M

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
log-slow-queries=/var/log/mysql/slow-queries.log
long_query_time = 1
log-queries-not-using-indexes

We connect to one database with 75 tables; the largest table has 1,150,000 rows and the second largest has 128,036. I have also verified that our PHP queries are optimized as well as possible. Reference - MySQLTuner output:

>> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.1.69-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in InnoDB tables: 420M (Tables: 75)
[!!] Total fragmented tables: 75

-------- Security Recommendations --------------------------------------------
[!!] User '[email protected]' has no password set.

-------- Performance Metrics -------------------------------------------------
[--] Up for: 1h 14m 50s (8M q [1K qps], 705 conn, TX: 6B, RX: 892M)
[--] Reads / Writes: 68% / 32%
[--] Total buffers: 19.7G global + 35.2M per thread (275 max threads)
[!!] Maximum possible memory usage: 29.1G (93% of installed RAM)
[OK] Slow queries: 0% (472/8M)
[OK] Highest usage of available connections: 66% (183/275)
[OK] Key buffer size / total MyISAM indexes: 384.0M/91.0K
[OK] Key buffer hit rate: 100.0% (173 cached / 0 reads)
[OK] Query cache efficiency: 96.2% (7M cached / 7M selects)
[!!] Query cache prunes per day: 553614
[OK] Sorts requiring temporary tables: 0% (3 temp sorts / 1K sorts)
[!!] Temporary tables created on disk: 49% (3K on disk / 7K total)
[OK] Thread cache hit rate: 74% (183 created / 705 connections)
[OK] Table cache hit rate: 97% (231 open / 238 opened)
[OK] Open file limit used: 0% (17/8K)
[OK] Table locks acquired immediately: 100% (432K immediate / 432K locks)
[OK] InnoDB data size / buffer pool: 420.9M/18.0G

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Reduce your overall MySQL memory footprint for system stability
    Increasing the query_cache size over 128M may reduce performance
    Temporary table size is already large - reduce result set size
    Reduce your SELECT DISTINCT queries without LIMIT clauses
Variables to adjust:
    *** MySQL's maximum memory usage is dangerously high ***
    *** Add RAM before increasing MySQL buffer variables ***
    query_cache_size (> 256M) [see warning above]

Thanks in advance for your help!
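Given that MySQLTuner already flags 553,614 query cache prunes per day and 49% of temporary tables spilling to disk, the CPU spikes are worth correlating with those two counters: on 5.1 a large query cache is guarded by a single mutex and can burn CPU in exactly this pattern. Below is a small monitoring sketch; it assumes the stock mysql command-line client is installed on the server and can authenticate non-interactively (for example via ~/.my.cnf), which the post doesn't state, so adjust the invocation to your setup.

# Sample a few SHOW GLOBAL STATUS counters every 10 seconds and print the deltas,
# so CPU spikes in `top` can be lined up with query-cache prunes and on-disk temp
# tables. Assumes the `mysql` CLI can log in without prompting (e.g. ~/.my.cnf).
import subprocess
import time

COUNTERS = ("Qcache_lowmem_prunes", "Created_tmp_disk_tables", "Queries")

def sample():
    """Return the selected cumulative counters from SHOW GLOBAL STATUS as ints."""
    out = subprocess.run(
        ["mysql", "--batch", "--skip-column-names", "-e", "SHOW GLOBAL STATUS"],
        capture_output=True, text=True, check=True,
    ).stdout
    values = dict(line.split("\t", 1) for line in out.splitlines() if "\t" in line)
    return {name: int(values[name]) for name in COUNTERS if name in values}

previous = sample()
for _ in range(60):                    # roughly ten minutes of samples
    time.sleep(10)
    current = sample()
    deltas = {name: current[name] - previous[name] for name in current}
    print(time.strftime("%H:%M:%S"), deltas)
    previous = current

If the prune delta jumps during the spikes, shrinking or disabling the query cache is usually a better fix than adding RAM; if the on-disk temp table delta jumps, look for GROUP BY/DISTINCT queries on TEXT/BLOB columns, which can never use in-memory temporary tables.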

    Read the article

  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
I'm attempting to debug a Subversion post-commit hook that calls some Python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it one particular step doesn't work. We're using CollabNet svnserve, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now:

Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name.

The relevant portion of post-commit.bat is:

echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log
set SITENAME=staging
set SVNPATH=branches/staging/wwwroot/
"C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
    --svnUser="svnusername" ^
    --svnPass="svnpassword" ^
    --ftp-user=ftpuser ^
    --ftp-password=ftppassword ^
    --ftp-remote-dir=/ ^
    --access-url=svn://10.0.100.6/company ^
    --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^
    --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log
echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log

When I run post-commit.bat manually, for example: post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log:

--------------------------
args1: c:\svn-repos\company
args0: staging.company.com
abspath: c:\svn-repos\company
project_dir: branches/staging/wwwroot/
local_repos_path: c:\svn-repos\company
getting youngest revision...
done, up-to-date
--------------------------

However, when I commit something to the repo and it runs automatically, the output is:

--------------------------
--------------------------

svn2ftp.py is a bit long, so I apologize but here goes. I'll have some notes/disclaimers about its contents below it.

#!/usr/bin/env python
"""Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH

Upload to FTP-HOST changes committed to the Subversion repository at
REPOS-PATH. Uses svn diff --summarize to only propagate the changed files

Options:

 -?, --help                  Show this help message.

 -u, --ftp-user=USER         The username for the FTP server. Default: 'anonymous'

 -p, --ftp-password=P        The password for the FTP server. Default: '@'

 -P, --ftp-port=X            Port number for the FTP server. Default: 21

 -r, --ftp-remote-dir=DIR    The remote directory that is expected to resemble
                             the repository project directory

 -a, --access-url=URL        This is the URL that should be used when trying to
                             SVN export files so that they can be uploaded to
                             the FTP server

 -s, --status-file=PATH      Required. This script needs to store the last
                             successful revision that was transferred to the
                             server. PATH is the location of this file.

 -d, --project-directory=DIR If the project you are interested in sending to
                             the FTP server is not under the root of the
                             repository (/), set this parameter.
                             Example: -d 'project1/trunk/'
                             This should NOT start with a '/'.

2008.5.2 CKS
  Fixed possible Windows-related bug with tempfile, where the script didn't have
  permission to write to the tempfile. Replaced this with a open()-created file
  created in the CWD.
2008.5.13 CKS
  Added error logging. Added exception for file-not-found errors when deleting files.
2008.5.14 CKS
  Change file open to 'rb' mode, to prevent Python's universal newline support from
  stripping CR characters, causing later comparisons between FTP and SVN to report
  changes.
"""

try:
    import sys, os
    import logging
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s %(levelname)s %(message)s',
        filename='svn2ftp.debug.log',
        filemode='a'
    )
    console = logging.StreamHandler()
    console.setLevel(logging.ERROR)
    logging.getLogger('').addHandler(console)

    import getopt, tempfile, smtplib, traceback, subprocess
    from io import StringIO
    import pysvn
    import ftplib
    import inspect
except Exception as e:
    logging.error(e)
    #capture the location of the error
    frame = inspect.currentframe()
    stack_trace = traceback.format_stack(frame)
    logging.debug(stack_trace)
    print(stack_trace)
    #end capture
    sys.exit(1)

#defaults
host = ""
user = "anonymous"
password = "@"
port = 21
repo_path = ""
local_repos_path = ""
status_file = ""
project_directory = ""
remote_base_directory = ""
toAddrs = "[email protected]"
youngest_revision = ""

def email(toAddrs, message, subject, fromAddr='[email protected]'):
    headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject)
    message = headers + message
    logging.info('sending email to %s...' % toAddrs)
    server = smtplib.SMTP('smtp.company.com')
    server.set_debuglevel(1)
    server.sendmail(fromAddr, toAddrs, message)
    server.quit()
    logging.info('email sent')

def captureErrorMessage(e):
    sout = StringIO()
    traceback.print_exc(file=sout)
    errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80)
    return errorMessage

def usage_and_exit(errmsg):
    """Print a usage message, plus an ERRMSG (if provided), then exit.
    If ERRMSG is provided, the usage message is printed to stderr and the script
    exits with a non-zero error code. Otherwise, the usage message goes to stdout,
    and the script exits with a zero errorcode."""
    if errmsg is None:
        stream = sys.stdout
    else:
        stream = sys.stderr
    print(__doc__, file=stream)
    if errmsg:
        print("\nError: %s" % (errmsg), file=stream)
        sys.exit(2)
    sys.exit(0)

def read_args():
    global host
    global user
    global password
    global port
    global repo_path
    global local_repos_path
    global status_file
    global project_directory
    global remote_base_directory
    global youngest_revision

    try:
        opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:",
                                       ["help",
                                        "ftp-user=",
                                        "ftp-password=",
                                        "ftp-port=",
                                        "ftp-remote-dir=",
                                        "access-url=",
                                        "status-file=",
                                        "project-directory=",
                                        "svnUser=",
                                        "svnPass="
                                        ])
    except getopt.GetoptError as msg:
        usage_and_exit(msg)

    for opt, arg in opts:
        if opt in ("-?", "--help"):
            usage_and_exit()
        elif opt in ("-u", "--ftp-user"):
            user = arg
        elif opt in ("-p", "--ftp-password"):
            password = arg
        elif opt in ("-SU", "--svnUser"):
            svnUser = arg
        elif opt in ("-SP", "--svnPass"):
            svnPass = arg
        elif opt in ("-P", "--ftp-port"):
            try:
                port = int(arg)
            except ValueError as msg:
                usage_and_exit("Invalid value '%s' for --ftp-port." % (arg))
            if port < 1 or port > 65535:
                usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.")
        elif opt in ("-r", "--ftp-remote-dir"):
            remote_base_directory = arg
        elif opt in ("-a", "--access-url"):
            repo_path = arg
        elif opt in ("-s", "--status-file"):
            status_file = os.path.abspath(arg)
        elif opt in ("-d", "--project-directory"):
            project_directory = arg

    if len(args) != 3:
        print(str(args))
        usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")")
    host = args[0]
    print("args1: " + args[1])
    print("args0: " + args[0])
    print("abspath: " + os.path.abspath(args[1]))
    local_repos_path = os.path.abspath(args[1])
    print('project_dir:',project_directory)
    youngest_revision = int(args[2])
    if status_file == "" : usage_and_exit("No status file specified")

def main():
    global host
    global user
    global password
    global port
    global repo_path
    global local_repos_path
    global status_file
    global project_directory
    global remote_base_directory
    global youngest_revision

    read_args()
    #repository,fs_ptr
    #get youngest revision
    print("local_repos_path: " + local_repos_path)
    print('getting youngest revision...')
    #youngest_revision = fs.youngest_rev(fs_ptr)
    assert youngest_revision, "Unable to lookup youngest revision."
    last_sent_revision = get_last_revision()

    if youngest_revision == last_sent_revision:
        # no need to continue. we should be up to date.
        print('done, up-to-date')
        return

    if last_sent_revision or youngest_revision < 10:
        # Only compare revisions if the DAT file contains a valid
        # revision number. Otherwise we risk waiting forever while
        # we parse and uploading every revision in the repo in the case
        # where a repository is retroactively configured to sync with ftp.
        pysvn_client = pysvn.Client()
        pysvn_client.callback_get_login = get_login
        rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision)
        rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision)
        summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False)
        print('summary len:',len(summary))
        if len(summary) > 0 :
            print('connecting to %s...' % host)
            ftp = FTPClient(host, user, password)
            print('connected to %s' % host)
            ftp.base_path = remote_base_directory
            print('set remote base directory to %s' % remote_base_directory)
            #iterate through all the differences between revisions
            for change in summary :
                #determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate.
                print('change path:',change.path)
                ftp_relative_path = apply_basedir(change.path)
                print('ftp rel path:',ftp_relative_path)
                #only try to sync path if the path is in our project_directory
                if ftp_relative_path != "" :
                    is_file = (change.node_kind == pysvn.node_kind.file)
                    if str(change.summarize_kind) == "delete" :
                        print("deleting: " + ftp_relative_path)
                        try:
                            ftp.delete_path("/" + ftp_relative_path, is_file)
                        except ftplib.error_perm as e:
                            if 'cannot find the' in str(e) or 'not found' in str(e):
                                # Log, but otherwise ignore path-not-found errors
                                # when deleting, since it's not a disaster if the file
                                # we want to delete is already gone.
                                logging.error(captureErrorMessage(e))
                            else:
                                raise
                    elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" :
                        local_file = ""
                        if is_file :
                            local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path)
                        print("uploading file: " + ftp_relative_path)
                        ftp.upload_path("/" + ftp_relative_path, is_file, local_file)
                        if is_file :
                            os.remove(local_file)
                    elif str(change.summarize_kind) == "normal" :
                        print("skipping 'normal' element: " + ftp_relative_path)
                    else :
                        raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path)
            ftp.close()

    #write back the last revision that was synced
    print("writing last revision: " + str(youngest_revision))
    set_last_revision(youngest_revision) # todo: undo

def get_login(a,b,c,d):
    #arguments don't matter, we're always going to return the same thing
    try:
        return True, "svnUsername", "svnPassword", True
    except Exception as e:
        logging.error(e)
        #capture the location of the error
        frame = inspect.currentframe()
        stack_trace = traceback.format_stack(frame)
        logging.debug(stack_trace)
        #end capture
        sys.exit(1)

#functions for persisting the last successfully synced revision
def get_last_revision():
    if os.path.isfile(status_file) :
        f=open(status_file, 'r')
        line = f.readline()
        f.close()
        try: i = int(line)
        except ValueError: i = 0
    else:
        i = 0
        f = open(status_file, 'w')
        f.write(str(i))
        f.close()
    return i

def set_last_revision(rev) :
    f = open(status_file, 'w')
    f.write(str(rev))
    f.close()

#augmented ftp client class that can work off a base directory
class FTPClient(ftplib.FTP) :

    def __init__(self, host, username, password) :
        self.base_path = ""
        self.current_path = ""
        ftplib.FTP.__init__(self, host, username, password)

    def cwd(self, path) :
        debug_path = path
        if self.current_path == "" :
            self.current_path = self.pwd()
            print("pwd: " + self.current_path)

        if not os.path.isabs(path) :
            debug_path = self.base_path + "<" + path
            path = os.path.join(self.current_path, path)
        elif self.base_path != "" :
            debug_path = self.base_path + ">" + path.lstrip("/")
            path = os.path.join(self.base_path, path.lstrip("/"))
            path = os.path.normpath(path)

        #by this point the path should be absolute.

        if path != self.current_path :
            print("change from " + self.current_path + " to " + debug_path)
            ftplib.FTP.cwd(self, path)
            self.current_path = path
        else :
            print("staying put : " + self.current_path)

    def cd_or_create(self, path) :
        assert os.path.isabs(path), "absolute path expected (" + path + ")"
        try: self.cwd(path)
        except ftplib.error_perm as e:
            for folder in path.split('/'):
                if folder == "" :
                    self.cwd("/")
                    continue
                try: self.cwd(folder)
                except:
                    print("mkd: (" + path + "):" + folder)
                    self.mkd(folder)
                    self.cwd(folder)

    def upload_path(self, path, is_file, local_path) :
        if is_file:
            (path, filename) = os.path.split(path)
            self.cd_or_create(path)
            # Use read-binary to avoid universal newline support from stripping CR characters.
            f = open(local_path, 'rb')
            self.storbinary("STOR " + filename, f)
            f.close()
        else:
            self.cd_or_create(path)

    def delete_path(self, path, is_file) :
        (path, filename) = os.path.split(path)
        print("trying to delete: " + path + ", " + filename)
        self.cwd(path)
        try:
            if is_file :
                self.delete(filename)
            else:
                self.delete_path_recursive(filename)
        except ftplib.error_perm as e:
            if 'The system cannot find the' in str(e) or '550 File not found' in str(e):
                # Log, but otherwise ignore path-not-found errors
                # when deleting, since it's not a disaster if the file
                # we want to delete is already gone.
                logging.error(captureErrorMessage(e))
            else:
                raise

    def delete_path_recursive(self, path):
        if path == "/" :
            raise "WARNING: trying to delete '/'!"

        for node in self.nlst(path) :
            if node == path :
                #it's a file. delete and return
                self.delete(path)
                return
            if node != "." and node != ".." :
                self.delete_path_recursive(os.path.join(path, node))

        try: self.rmd(path)
        except ftplib.error_perm as msg :
            sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg))

# apply the project_directory setting
def apply_basedir(path) :
    #remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated
    if not path.startswith(project_directory) :
        return ""
    return path.replace(project_directory, "", 1)

def svn_export_temp(pysvn_client, base_path, rev, path) :
    # Causes access denied error. Couldn't deduce Windows-perm issue.
    # It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it.
    # Regardless, just generating a simple filename seems to work.
    #(fd, dest_path) = tempfile.mkstemp()
    dest_path = tmpName = '%s.tmp' % __file__
    exportPath = os.path.join(base_path, path).replace('\\','/')
    print('exporting %s to %s' % (exportPath, dest_path))

    pysvn_client.export( exportPath,
                         dest_path,
                         force=False,
                         revision=rev,
                         native_eol=None,
                         ignore_externals=False,
                         recurse=True,
                         peg_revision=rev )

    return dest_path

if __name__ == "__main__":
    logging.info('svnftp.start')
    try:
        main()
        logging.info('svnftp.done')
    except Exception as e:
        # capture the location of the error for debug purposes
        frame = inspect.currentframe()
        stack_trace = traceback.format_stack(frame)
        logging.debug(stack_trace[:-1])
        print(stack_trace)
        # end capture
        error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e)
        subject = "ALERT: SVN2FTP Error"
        message = """An Error occurred while trying to FTP an SVN commit.

repo_path = %(repo_path)s\n
local_repos_path = %(local_repos_path)s\n
project_directory = %(project_directory)s\n
remote_base_directory = %(remote_base_directory)s\n
error_text = %(error_text)s
""" % globals()
        email(toAddrs, message, subject)
        logging.error(e)

Notes/Disclaimers: I have basically no Python training, so I'm learning as I go and spending lots of time reading docs to figure stuff out. The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right? The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnUser and svnPass arguments to the existing argument parsing.)

So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is:

2012-09-06 15:18:12,496 INFO svnftp.start
2012-09-06 15:18:12,496 INFO svnftp.done

And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one and don't know where to go from here. Any ideas?
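One thing worth ruling out - and this is only a guess, the post doesn't confirm it - is that under svnserve the hook runs with an empty environment and a different working directory, so the relative filename 'svn2ftp.debug.log' may be written somewhere unexpected, and print() output can be lost before the >> redirection in the batch file ever sees it (for example if the interpreter dies while importing pysvn). A preamble like the sketch below, placed at the very top of svn2ftp.py, pins the log next to the script and mirrors stdout/stderr into it, so even an import-time failure leaves a trace. The LoggerWriter class and the log path are illustrative additions, not part of the original script.

# Diagnostic preamble (a sketch, not a confirmed fix): write the log next to the
# script regardless of the hook's working directory, and mirror print()/traceback
# output into it so nothing is lost if svnserve discards the hook's stdout.
import logging
import os
import sys

LOG_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'svn2ftp.debug.log')
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
    filename=LOG_PATH,
    filemode='a',
)

class LoggerWriter:
    """Minimal file-like object that forwards writes to a logging call."""
    def __init__(self, log_func):
        self.log_func = log_func
    def write(self, text):
        text = text.strip()
        if text:                       # skip the bare newlines print() emits
            self.log_func(text)
    def flush(self):
        pass

sys.stdout = LoggerWriter(logging.info)    # print(...) now lands in the log file
sys.stderr = LoggerWriter(logging.error)   # unhandled tracebacks do too

logging.info('hook invoked: argv=%r cwd=%r', sys.argv, os.getcwd())

If the log then shows the hook starting but dying during the pysvn import or inside read_args, that points at the stripped environment rather than the FTP logic.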

    Read the article
