Search Results

Search found 9446 results on 378 pages for 'ssh keys'.


  • How can I make sound work without starting X?

    - by Magnus Hoff
    I have a headless machine connected to my sound system, and I am using it to run a music-playing daemon that I control over the network (among other things). However, I can't seem to get sound to come out of my speakers without running X. I am running PulseAudio in a system-wide instance, and my daemon is not running within X. Nevertheless, when my daemon is playing music without me hearing it, I can fix it by running startx in an unrelated session. After X starts, I can hear the sound. The sound disappears again if I kill the X server. Interestingly/annoyingly, the sound also stops after X has been running for a few minutes. This could possibly be because of a screen saver of some sort, but I haven't been able to verify or falsify this theory. So my current workaround is to ssh into the box whenever I want music, run startx, and restart it every fifteen minutes or so. I'd like to do better; a sketch of how this stopgap could at least be automated follows at the end of this post.

    I have been able to verify the following:

    - Adjustments in alsamixer have no effect on this problem. The relevant output channel is never muted, and in alsamixer I can see no difference between when the sound is working and when it isn't.
    - Nothing is muted in pactl list, and there is no difference in the output of pactl list between before starting X and after it has started (except the identifier of the pactl instance connected to PulseAudio, which is different each time you run pactl).
    - The user running the music daemon is a member of the groups audio, pulse and pulse-access.
    - The music daemon program does not report any error messages and acts as if it is playing the music like it should.
    - Some form of D-Bus daemon is running: ps aux | grep dbus reports dbus-daemon --system --fork --activation=upstart both before and after I have started X.

    Some details about my hardware:

    - Motherboard: http://www.asus.com/Motherboards/AT5IONTI_DELUXE/
    - Sound chip: Nvidia GPU 0b HDMI/DP (from alsamixer)
    - Using HDMI for output (the machine also has an Intel Realtek ALC887 that I am not using)

    Output of lsmod:

        Module Size Used by
        deflate 12617 0
        zlib_deflate 27139 1 deflate
        ctr 13201 0
        twofish_generic 16635 0
        twofish_x86_64_3way 25287 0
        twofish_x86_64 12907 1 twofish_x86_64_3way
        twofish_common 20919 3 twofish_generic,twofish_x86_64_3way,twofish_x86_64
        camellia 29348 0
        serpent 29125 0
        blowfish_generic 12530 0
        blowfish_x86_64 21466 0
        blowfish_common 16739 2 blowfish_generic,blowfish_x86_64
        cast5 25112 0
        des_generic 21415 0
        xcbc 12815 0
        rmd160 16744 0
        bnep 18281 2
        rfcomm 47604 12
        sha512_generic 12796 0
        crypto_null 12918 0
        parport_pc 32866 0
        af_key 36389 0
        ppdev 17113 0
        binfmt_misc 17540 1
        nfsd 281980 2
        ext2 73795 1
        nfs 436929 1
        lockd 90326 2 nfsd,nfs
        fscache 61529 1 nfs
        auth_rpcgss 53380 2 nfsd,nfs
        nfs_acl 12883 2 nfsd,nfs
        sunrpc 255224 16 nfsd,nfs,lockd,auth_rpcgss,nfs_acl
        btusb 18332 2
        vesafb 13844 2
        pl2303 17957 1
        ath3k 12961 0
        bluetooth 180153 24 bnep,rfcomm,btusb,ath3k
        snd_hda_codec_hdmi 32474 4
        nvidia 11308613 0
        ftdi_sio 40679 1
        usbserial 47113 6 pl2303,ftdi_sio
        psmouse 97485 0
        snd_hda_codec_realtek 224173 1
        snd_hda_intel 33719 5
        snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        serio_raw 13211 0
        snd_seq_midi 13324 0
        snd_hwdep 17764 1 snd_hda_codec
        snd_pcm 97275 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_rawmidi 30748 1 snd_seq_midi
        snd_seq_midi_event 14899 1 snd_seq_midi
        snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event
        snd_timer 29990 2 snd_pcm,snd_seq
        snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq
        snd 79041 20 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        asus_atk0110 18078 0
        mac_hid 13253 0
        jc42 13948 0
        soundcore 15091 1 snd
        snd_page_alloc 18529 2 snd_hda_intel,snd_pcm
        coretemp 13554 0
        i2c_i801 17570 0
        lp 17799 0
        parport 46562 3 parport_pc,ppdev,lp
        r8169 62154 0

    Any ideas? What does X do that's so important?
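    For completeness, here is a minimal sketch of how that stopgap could at least be automated rather than done by hand. It assumes passwordless SSH to the box ("mediabox" is a placeholder hostname) and simply restarts X on the interval mentioned above; it is a crutch, not a fix:

        #!/usr/bin/env python
        # Stopgap sketch: restart X on the headless box every ~15 minutes
        # so sound keeps working. "mediabox" is a placeholder hostname and
        # passwordless SSH is assumed.
        import subprocess
        import time

        HOST = "mediabox"
        INTERVAL = 15 * 60  # seconds

        while True:
            # Kill any running X server, then start a fresh one detached.
            subprocess.call(
                ["ssh", HOST, "pkill Xorg; nohup startx >/dev/null 2>&1 &"])
            time.sleep(INTERVAL)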


  • 11.10 desktop alerts (volume change and terminal bell) stopped working but all other audio still works

    - by FlabbergastedPickle
    All, my sound works just fine in a 64-bit 11.10 install on an HP dm1-4050 Sandy Bridge notebook (e.g. audio works in Banshee, Flash, games, the browser, Thunderbird email notifications, etc.), but the core desktop notifications do not work: pressing Tab in a terminal where there is more than one completion option should trigger a terminal bell, and changing volume using the volume keys should be accompanied by the supporting "quack" that the volume app makes. I intentionally disabled the login sound as explained here on Ask Ubuntu, but even enabling it again makes no difference. These notifications did work before, and I am not sure when they actually stopped working, but it must have been fairly recently.

    The only things I did were: trying to install some PPA edge xorg drivers for my Intel card (a separate issue), all of which I reverted with ppa-purge once I discovered they did not improve anything; and checking volume settings with alsamixer and running alsactl store for the sound card after some experimenting with the PCM level (on my laptop, PCM at 100% crackles, so I had to lower it and make PulseAudio ignore its setting, as per Ask Ubuntu's page). That said, neither of these should have any bearing on the said notifications, since the volume is up and sounds clearly work everywhere else but the core desktop events. The system-ready drum sound when Ubuntu boots and reaches the login screen also does not work.

    The guest login behaves exactly the same as mine: audio works (including the login sound, since I have not disabled it for the guest account), but no quacks when changing the volume and no terminal bell sounds...

    I've tried copying the Ubuntu sounds to /usr/share/sounds/ as suggested on Ask Ubuntu, and that did not work. I also tried using dconf-editor to check the sound theme settings and tried both freedesktop (which is what it was set to) and ubuntu, as suggested on Ask Ubuntu. This did not work either. I tried purging the ~/.pulse folder and the /tmp/*pulse* entries, rebooting, and restarting pulseaudio with the -D flag (a rough script for repeating that experiment follows at the end of this post). Audio came back on and behaved just fine in all aspects (adjusting volume levels, music, games, in-browser sound, other app alerts) except for the system-ready drum sound at the login screen and the system events (terminal bell and the volume-change quack). It is interesting that the quack sound works inside System Settings > Sound when adjusting levels there, but not when the volume is changed via the top bar's volume control...

    I do recall that at one point yesterday, when I was restarting pulseaudio, the quacks accompanying volume changes did start working, but I have no idea what caused that. This was also when I first realized those alerts were not working. After rebooting they were gone again.

    I did compile my own 3.0.14-rt31 kernel a little while ago, as instructed on one of the wikis for the 11.10 rt kernel. Everything works as before except for the said sound alerts. I am not sure whether this began happening when I started using the rt kernel, though, and yesterday's momentary ability to hear those quacks while changing the volume makes me believe the kernel is not responsible for this problem.

    One more thing I can think of is that I used the alsoft-conf tool to configure buffering for OpenAL (due to TA Spring's choppy audio) and changed the default audio device there to ALSA. I tried reverting it to PulseAudio as the only allowed output, but the bottom part of the Backend tab always reverts to ALSA even when I select PulseAudio; PulseAudio does remain the only active choice on top. This, however, once again does not make any sense as a cause for the missing desktop alerts when everything else, including OpenAL games, plays sound just fine...

    So, there you have it, as verbose as I could make it :-). I tried everything I could find on this issue and have had no luck so far... Any ideas?
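    For anyone who wants to repeat that purge-and-restart experiment without retyping the commands, here is a rough sketch; the paths are the ones mentioned above, it should be run as the affected user, and it is an experiment, not a fix:

        #!/usr/bin/env python
        # Purge per-user PulseAudio state and restart the daemon,
        # repeating the experiment described in the post.
        import glob
        import os
        import shutil
        import subprocess

        shutil.rmtree(os.path.expanduser("~/.pulse"), ignore_errors=True)
        for path in glob.glob("/tmp/*pulse*"):
            if os.path.isdir(path):
                shutil.rmtree(path, ignore_errors=True)
            else:
                os.remove(path)

        subprocess.call(["pulseaudio", "--kill"])  # stop the daemon, if running
        subprocess.call(["pulseaudio", "-D"])      # start it again, daemonized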


  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. Since time is short, we have to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know everything around SQL, and I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage all of you to read my blog post about it over here: Big Data - Buzz Words: What is NewSQL - Day 10 of 21. NewSQL databases are quickly becoming popular, providing the scale of NoSQL along with SQL features and transactions.

    As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database.

    The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was previously available only as an appliance, but with their software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database.

    Here are a few of the details I have quickly noted. ClustrixDB allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Let the cluster manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible and works with MySQL drivers, tools and replication (see the connection sketch at the end of this post)

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution.

    Since ClustrixDB sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them:

    - Twoo.com: Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR: Top 3 in the online advertising category; uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack: Top 2 fastest growing e-commerce company in the US; used ClustrixDB for high availability and fast growth on the Amazon cloud.
    - MakeMyTrip: India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the conversation turns to NewSQL, I know what I am talking about.

    Read more about why Clustrix thinks ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
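    Since ClustrixDB speaks the MySQL wire protocol, connecting to it from code should look exactly like connecting to MySQL. Here is a minimal sketch using a standard Python MySQL driver; the host, credentials and table are placeholders, and I have not yet run this against a real cluster:

        # Because ClustrixDB is MySQL-compatible, any MySQL driver should work.
        # Host, credentials and schema below are placeholders.
        import pymysql  # assumes PyMySQL (or any MySQL driver) is installed

        conn = pymysql.connect(host="clustrix-node1", user="app",
                               password="secret", database="shop")
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT COUNT(*) FROM orders")
                print(cur.fetchone())
        finally:
            conn.close()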


  • The gestures of Windows 8 (Consumer preview): part 2, More about Search

    - by Laurent Bugnion
    This is part 2 of a multipart blog post about the gestures and shortcuts in Windows 8 Consumer Preview. Part 1 can be found here!

    More about the Search charm

    In the first installment of this series, we talked about the charms and mentioned a few gestures to display the Search charm. Search is a very central and powerful feature in Windows 8, and allows you to search in Apps, Settings, Files and within Metro applications that support the Search contract. There are a few cool features around Search, and especially around the applications associated with it. I already mentioned the keyboard shortcuts you can use:

    - Win-C shows the Charms bar (same as swiping from the right bezel towards the center of the screen).
    - Win-Q opens the Search flyout with Apps preselected.
    - Win-W opens the Search flyout with Settings preselected.
    - Win-F opens the Search flyout with Files preselected.

    Searching in Metro apps

    In addition to these three search domains, you can also search a Metro app, as long as it supports the Search contract (check this Build video to learn more about the Search contract). These apps show up in the Search flyout as shown here: notice the list of apps below the Files button? That's what we are talking about. First of all, the list order changes when you search in some applications. For instance, in the image above, I had used the Store with the Search charm, which is why the Store shows up as the first app. I am not 100% sure what algorithm is used here (sorting according to the number of searches is my guess), but try it out and see if you can figure it out. Applications that have never been searched are sorted alphabetically. Does this mean we will see cool app names like ___AAA_MyCoolApp? I certainly hope not!!

    Pinning

    You can also pin often-used apps to the Search flyout. To pin an app with the mouse, right-click on it in the Search flyout and select Pin from the context menu. With the keyboard, use the arrow keys to go down to the selected app, and then open the context menu. With the finger, simply tap and hold until you see a semi-transparent rectangle indicating that the context menu will be shown, then release. The context menu opens up and you can select Pin. (Screenshots: the Pin context menu; pinned apps.)

    Unpinning, Hiding

    Using the same technique as for pinning above, you can also unpin a pinned application. Finally, you can also choose to hide an app from the Search flyout altogether. This is a convenient way to clean up and make it easy to find stuff. Note: at this point, I am not sure how to re-add a hidden app to the Search flyout. If anyone knows, please mention it in the comments, thanks!

    Reordering

    You can also reorder pinned apps. To do this with the finger, tap, hold and pull the app to the side, then pull it vertically to the place you want to put it. You can also reorder with the mouse, simply by clicking on an app and pulling it vertically. I don't think there is a way to do that with the keyboard, though.

    That's it for now. More gestures will follow in a next installment! Have fun with Windows 8.

    Laurent Bugnion (GalaSoft)


  • My 2D collision code does not work as expected. How do I fix it?

    - by farmdve
    I have a simple 2D game with a tile-based map. I am new to game development; I followed the LazyFoo tutorials on SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number for the type of tile (color, or wall). The game is simple, but there is a lot of code, so I can only post snippets.

        // Player moved out of the map
        if (player.box.x < 0)
            player.box.x += GetVelocity(player, 0);
        if (player.box.y < 0)
            player.box.y += GetVelocity(player, 1);
        if (player.box.x > (LEVEL_WIDTH - DOT_WIDTH))
            player.box.x -= GetVelocity(player, 0);
        if (player.box.y > (LEVEL_HEIGHT - DOT_HEIGHT))
            player.box.y -= GetVelocity(player, 1);

        // Now that we are here, we check for collisions
        if (touches_wall(player.box))
        {
            if (player.box.x < player.prev_x)
                player.box.x += GetVelocity(player, 0);
            if (player.box.x > player.prev_x)
                player.box.x -= GetVelocity(player, 0);
            if (player.box.y < player.prev_y)
                player.box.y += GetVelocity(player, 1);
            if (player.box.y > player.prev_y)
                player.box.y -= GetVelocity(player, 1);
        }

        player.prev_x = player.box.x;
        player.prev_y = player.box.y;

    Let me explain: player is a structure with the following contents:

        typedef struct
        {
            Rectangle box;       // Player position on the map (tile or whatever)
            int prev_x, prev_y;  // Previous positions
            int key_press[3];    // Which keys were pressed/released. Limited to three keys,
                                 // e.g. left, right and perhaps jump if possible in 2D
            int velX, velY;      // Velocity for the X and Y coordinates
            int health;          // Health
            bool main_character;
            uint32_t jump_ticks;
        } Player;

    Rectangle is just a typedef of SDL_Rect, and GetVelocity is a function that, according to its second argument, returns the velocity for the X or Y axis. This code basically works; however, inside the if (touches_wall(player.box)) statement I have 4 more if statements. Those 4 if statements are responsible for detecting collision on all 4 sides (up, down, left, right), but they also block any other movement.

    Example: I move the object down and collide with the wall. While I continue to hold down and still collide with the wall, I want to move left or right, which should be possible (it certainly is in 3D games), but remember the 4 if statements? They prevent me from moving anywhere.

    The original code on the LazyFoo Productions website has no problems, but it was written in C++, so I had to rewrite most of it to make it work, which is probably where the problem comes from. I also use a different method of moving than the one in the examples. Of course, that was just an example: I wish to be able to keep moving no matter which wall I collide with. Before this bit of code I had another version with more logic in it, but it was flawed too.
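    A common way around this (not the LazyFoo approach specifically, just the standard technique) is to resolve the two axes separately: apply the X velocity, test for collision, and undo only X on a hit; then do the same for Y. That way a wall blocking one axis never blocks the other, so you can slide along it. A minimal sketch of the idea, written in Python for brevity; the player object and touches_wall() stand in for the C structures above:

        # Per-axis collision resolution: a blocked Y axis no longer
        # prevents movement along X, and vice versa.
        def move_player(player, vel_x, vel_y, touches_wall):
            player.x += vel_x
            if touches_wall(player):
                player.x -= vel_x  # undo only the X move

            player.y += vel_y
            if touches_wall(player):
                player.y -= vel_y  # undo only the Y move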


  • XNA Multiplayer Games and Networking

    - by JoshReuben
    · XNA communication must be lightweight by default: if you are syncing game state between players from the Game.Update method, you must minimize traffic. That game loop may be firing 60 times a second, and player 5 needs to know whether his tank has collided with player 3's, and the angle of that gun turret. There are no WCF ServiceContract / DataContract niceties here, but at the same time the XNA networking stack simplifies the details. The payload must be simplistic: just an ordered set of numbers that you map to meaningful enum values upon deserialization (see the sketch at the end of this post).

    Overview

    · XNA allows you to create and join multiplayer game sessions, to manage game state across clients, and to interact with the friends list.
    · Dependency on Gamer Services, to receive notifications such as sign-in status changes and game invitations.
    · Two types of online multiplayer games: system link game sessions (LAN) and LIVE sessions (WAN).
    · Minimum dev requirements: 1 Xbox 360 console + Creators Club membership to test network code; run 1 instance of the game on the Xbox 360 and 1 on a Windows-based computer.

    Network Sessions

    · A network session is made up of the players in a game, plus up to 8 arbitrary integer properties describing the session.
    · Create custom enums (e.g. GameMode, SkillLevel) as keys in the NetworkSessionProperties collection.
    · Player states: lobby, in-play.

    Session Types

    · Local session: for split-screen gaming; requires no network traffic.
    · System link session: connects multiple gaming machines over a local subnet.
    · Xbox LIVE multiplayer session: occurs on the Internet; ranked or unranked.

    Session Updates

    · The NetworkSession class Update method must be called once per frame. It performs the following actions:
      o Sends the network packets.
      o Changes the session state.
      o Raises the managed events for any significant state changes.
      o Returns the incoming packet data.
    · Synchronize the session via the packet-received and state-change events, so there are no threading issues.

    Session Config

    · Session host: the gaming machine that creates the session. XNA handles host migration.
    · NetworkSession properties: AllowJoinInProgress, AllowHostMigration.
    · NetworkSession groups: AllGamers, LocalGamers, RemoteGamers.

    Subscribe to NetworkSession events

    · GamerJoined
    · GamerLeft
    · GameStarted
    · GameEnded (use to return to the lobby)
    · SessionEnded (use to return to the title screen)

    Create a Session

        session = NetworkSession.Create(
            NetworkSessionType.SystemLink,
            maximumLocalPlayers,
            maximumGamers,
            privateGamerSlots,
            sessionProperties);

    Start a Session

        if (session.IsHost)
        {
            if (session.IsEveryoneReady)
            {
                session.StartGame();
                foreach (var gamer in SignedInGamer.SignedInGamers)
                {
                    gamer.Presence.PresenceMode =
                        GamerPresenceMode.InCombat;
                }
            }
        }

    Find a Network Session

        AvailableNetworkSessionCollection availableSessions = NetworkSession.Find(
            NetworkSessionType.SystemLink,
            maximumLocalPlayers,
            networkSessionProperties);
        availableSessions.AllowJoinInProgress = true;

    Join a Network Session

        NetworkSession session = NetworkSession.Join(
            availableSessions[selectedSessionIndex]);

    Sending Network Data

        var packetWriter = new PacketWriter();

        foreach (LocalNetworkGamer gamer in session.LocalGamers)
        {
            // Get the tank associated with this player.
            Tank myTank = gamer.Tag as Tank;

            // Write the data.
            packetWriter.Write(myTank.Position);
            packetWriter.Write(myTank.TankRotation);
            packetWriter.Write(myTank.TurretRotation);
            packetWriter.Write(myTank.IsFiring);
            packetWriter.Write(myTank.Health);

            // Send it to everyone.
            gamer.SendData(packetWriter, SendDataOptions.None);
        }

    Receiving Network Data

        foreach (LocalNetworkGamer gamer in session.LocalGamers)
        {
            // Keep reading while packets are available.
            while (gamer.IsDataAvailable)
            {
                NetworkGamer sender;

                // Read a single packet.
                gamer.ReceiveData(packetReader, out sender);

                if (!sender.IsLocal)
                {
                    // Get the tank associated with this packet.
                    Tank remoteTank = sender.Tag as Tank;

                    // Read the data and apply it to the tank.
                    remoteTank.Position = packetReader.ReadVector2();
                    // ...
                }
            }
        }

    End a Session

        if (session.AllGamers.Count == 1)
        {
            session.EndGame();
            session.Update();
        }

    Performance

    · Aim to minimize the payload and the use of reliable, in-order messages.
    · Send Data Options:
      o Unreliable, out of order (SendDataOptions.None)
      o Unreliable, in order (SendDataOptions.InOrder)
      o Reliable, out of order (SendDataOptions.Reliable)
      o Reliable, in order (SendDataOptions.ReliableInOrder)
      o Chat data (SendDataOptions.Chat)
    · Simulate: NetworkSession.SimulatedLatency, NetworkSession.SimulatedPacketLoss
    · Voice support, via NetworkGamer properties: HasVoice, IsTalking, IsMutedByLocalUser
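    The "ordered set of numbers mapped to enum values on deserialization" idea is not XNA-specific. Here is a minimal sketch of the same payload discipline using Python's struct module; the field layout is invented for illustration:

        # The wire format is just ordered numbers; meaning is reattached
        # on deserialization. The field layout is invented for illustration.
        import struct
        from enum import IntEnum

        class Action(IntEnum):
            IDLE = 0
            FIRING = 1

        LAYOUT = "<ffffB"  # x, y, tank rotation, turret rotation, action

        def encode(x, y, tank_rot, turret_rot, action):
            return struct.pack(LAYOUT, x, y, tank_rot, turret_rot, action)

        def decode(payload):
            x, y, tank_rot, turret_rot, action = struct.unpack(LAYOUT, payload)
            return x, y, tank_rot, turret_rot, Action(action)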


  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to travel through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while, and although there are plenty of marvelous things to see and do, I must keep on working :(.

    I am a Python developer, dedicated mainly to web projects. I use Django, sqlite3 and browsers, and occasionally (only if I have no choice) I install Postgres, MySQL, Apache or any other server commonly used on the internets. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... so I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (which, after all, is not mandatory for my day-to-day work).

    I currently work on a MacBook Pro (3 years old now) and so far I am very happy with its performance. But I don't want to carry it while I am "in transit" for a long time; it is simply too huge and heavy for what I am planning to load into my rather small backpack (while traveling, less is more, you know). So here I am, reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose.

    I am going to use Linux on it; Microsoft is not my cup of tea, and Mac is not available for netbooks, unless I were to buy a MacBook Air, something I won't do, because if I were robbed or rain/dust/truck loaders broke it, I would burst into tears. I am concerned about wifi performance and connectivity, since I am going to use one of those Linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not somewhere with real free wifi access and find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider.

    With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the war between netbooks and notebooks (I assume tablet PCs are not for programming yet): if I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK?

    Thanks! Hector

    Update

    I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (the Trans-Siberian) and will cross Mongolia as well. I will stay at friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad, hehe!). I think some of your answers focus on the geek part but lose sight of this "on the road" fact. I am very aware, and agree, that netbooks suck compared to notebooks, but what I am trying to do here is find a balance and learn from your experiences with netbooks, to see first-hand whether a netbook will fail me in the mid-to-long term of the trip.

    So I have summarized the main concepts expressed here in this small list, in no particular order:

    - Keyboard/touchpad feel: I use vim, so there is no need to move a mouse pointer that much, unless I am browsing the web; but there is intensive use of the keyboard.
    - Screen real estate: again, terminal work most of the time.
    - Battery life: very important, I think.
    - Weight/size: also very important.
    - Looks not worth stealing; don't give a shit if it is lost/stolen/broken: this may depend on the kind of person, your economy, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance.
    - Wifi: I don't want to discover that my wifi is one of those that cannot deal with half the routers on this planet, or has poor connectivity.

    Thanks again for your answers and comments!


  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab all the information required.

    The table I'm working on contains 8.8 billion rows, so finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: distinct queries against 8.8 billion rows don't go quickly.

    A few beers into a conversation with a friend, we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; it just needed to be pulled together.

    The first table I needed was sys.partition_range_values. This contains one row for each range boundary value of a partition function. In my case I have a partition function which uses dayid values; for example, July 4th would be represented as an int, 20130704. This table lists all of the dayid values which were defined in the function, which eliminated the need to query my source table for distinct dayid values: everything I needed was already built in for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an "everything else" bucket in my SSIS package, in case I had any dayid values unaccounted for.

    Getting the number of rows for a partition is very easy: the sys.partitions table contains values for each partition. Easy enough to achieve by querying for the object_id and an index value of 1 (the clustered index).

    The final piece of information was the filegroup name. There are 2 options available to get the filegroup name, sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups, you need the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups.

    The end result is the query below, which typically executes for me in under 1 second. I've included both the join to sys.filegroups and the join to sys.data_spaces, and I've just commented out the join to sys.filegroups. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
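    Reconstructed from the description above, the query would look roughly like the following. This is an untested sketch written from the prose, not the original post; in particular, matching boundary values to partition numbers depends on whether the partition function is RANGE LEFT or RANGE RIGHT, and a real query would also filter sys.partition_range_values by the table's partition function:

        -- Untested sketch reconstructed from the description above:
        -- rows per partition plus filegroup name for a partitioned table.
        -- 'dbo.MyPartitionedTable' is a placeholder name.
        SELECT  p.partition_number,
                prv.value AS boundary_value,   -- the dayid boundary, e.g. 20130704
                p.rows,
                fg.name   AS filegroup_name
        FROM    sys.partitions p
        JOIN    sys.allocation_units au ON au.container_id  = p.hobt_id
        JOIN    sys.filegroups fg       ON fg.data_space_id = au.data_space_id
        -- JOIN sys.data_spaces ds      ON ds.data_space_id = au.data_space_id
        LEFT JOIN sys.partition_range_values prv
                ON prv.boundary_id = p.partition_number   -- adjust for LEFT/RIGHT ranges
        WHERE   p.object_id = OBJECT_ID('dbo.MyPartitionedTable')
          AND   p.index_id  = 1;   -- the clustered index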


  • Graphics trouble after resuming from hibernate or suspend

    - by Voyagerfan5761
    I have a Dell Inspiron 2650 (with NVidia graphics, using nouveau drivers) that I'm using to try out Ubuntu. It's all great, except that hibernate and suspend aren't usable. Yes, I know that questions about power-saving issues are rampant in the Linux support universe, but it seems that every time I find a solution, it's for a very specific hardware combination that doesn't apply to me. So anyway, here goes.

    When I resume from either power-saving mode, I get graphics problems ranging anywhere from a few scattered random-colored pixels that won't change, all the way to full-screen patterns that don't change as I move the mouse, hit keys on the keyboard, or even bring up the shutdown dialog using the power button. Those full-screen issues (which may involve stripes with random pixels, a partially black screen, or both) always end in me forcing the machine to shut down by holding the power button. I haven't done much testing yet to determine which severity level is most commonly associated with each mode, but I do avoid using either power-save option because of these issues.

    I'll add info on my hardware as I can gather it (no home internet connection, and this laptop is tethered to my desk by a dead battery and casing degradation). Please feel free to request something specific in the question comments.

    Hardware Info

    See this hardinfo report for my system's hardware configuration. (No, my username is not "myuser"; I sanitized hardinfo's output before publishing it.)

    Screenshots

    These screenshots are from a relatively mild occurrence, which happened after the second hibernation I took that session. The first one worked great, though I used the wireless card and Firefox heavily between the two hibernation attempts. The screenshots show what happened when I opened my home directory in Nautilus and scrolled it. See below for the situations I've tested so far. The real trouble comes when the machine resumes to an unusable state; in such cases I can't even unlock the screen or properly reboot, much less take a screenshot. I have a hunch that putting a CD in the drive will cause such major failures, and I will try that at some point; see the related question.

    Situations Tested

    Maverick (10.10)

    Suspend:
    - Seems to suspend nicely with nothing running.
    - Seems to suspend nicely with a flash drive plugged in.
    - On resume from suspend with no flash drive, and Terminal and gedit running: funky graphics on top of log output, then a blank screen with a pixelated cursor; no response to the power button (which normally shuts the machine down 60 seconds later).

    Hibernate:
    - Seems to hibernate nicely with nothing running.
    - Seems to hibernate nicely with a few apps (Terminal, Mouse preferences) running.
    - Seems not to hibernate with a flash drive plugged in.
    - Seems not to hibernate while System Monitor is running.
    - Have encountered failed hibernation (after several hours and one successful hibernate/thaw cycle) with no external media connected and no programs running except the normal background stuff.

    Natty LiveCD (11.04_2010-12-22)

    When I tested it, Natty wouldn't stay logged in. It played part of the login sound and then [ OK ] appeared in the top right corner (white-on-black terminal text) for a few seconds. Then it kicked me back to the unlock screen. It did that four times before I gave up and just tested suspend from the unlock screen.

    Suspend:
    - Resumed to vertical gray and black lines 2px (?) wide, then shifted to vertical "jail bars" of black over a black screen with the random pixels described above and the mouse pointer. No apparent response to input from the mouse (clicking randomly); keyboard and touchpad unrecognized.


  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable (I've fixed it up slightly so it actually runs; in particular, the registry must be referenced through the class or instance):

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                hash_id = hash((arg1, arg2))
                if use_cache and hash_id in self._registry:
                    return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                self._registry[hash_id] = obj
                return obj

        # module x.py
        class X:
            # ...
            pass

    Is it a good pattern? (I know it's not the actual Factory pattern.) Is there anything I should change?

    Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving the pickle files filenames that contain hash_id).

    Except now the validity of a cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created the object. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage.

    What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains the arguments arg1 and arg2, as well as the filename and last-modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation.

    Thus, I figured I might as well give up some safety and invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache-validation identifier that should be changed whenever the developer believes a change has happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry.

    Since developers may forget to update the validation identifier, or not realize that they invalidated existing cache entries, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation would include the traits as well.

    I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
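    To make the "hash the arguments plus the code version" idea concrete, here is a minimal sketch of one way such a key could be built. The dependency list is passed in explicitly, since discovering it automatically is exactly the hard part discussed above:

        # Version-aware cache key: hash the arguments together with the
        # mtimes of an explicit list of dependency files. Automatic
        # dependency discovery is deliberately left out.
        import hashlib
        import os
        import pickle

        def cache_key(args, dependency_paths):
            mtimes = tuple(os.path.getmtime(p) for p in sorted(dependency_paths))
            blob = pickle.dumps((args, mtimes))
            return hashlib.sha1(blob).hexdigest()

        # e.g. key = cache_key((arg1, arg2), ["x.py", "xfactory.py"])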


  • Ubuntu 12.04 LXC nat prerouting not working

    - by petermolnar
    I have a running Debian Wheezy setup that I copied exactly to an Ubuntu 12.04 machine (elementary OS, used as a desktop as well). While the Debian setup runs flawlessly, the Ubuntu version dies on the prerouting to containers (or so it seems).

    In short:

    - lxc works
    - containers work and run
    - connecting to a container from the host is OK (including mixed ports & services)
    - connecting to the outside world from a container is fine

    What does not work is connecting from another box to the host on a port that should be NATed to a container.

    The setup:

    /etc/rc.local:

        CMD_BRCTL=/sbin/brctl
        CMD_IFCONFIG=/sbin/ifconfig
        CMD_IPTABLES=/sbin/iptables
        CMD_ROUTE=/sbin/route
        NETWORK_BRIDGE_DEVICE_NAT=lxc-bridge
        HOST_NETDEVICE=eth0
        PRIVATE_GW_NAT=192.168.42.1
        PRIVATE_NETMASK=255.255.255.0
        PUBLIC_IP=192.168.13.100

        ${CMD_BRCTL} addbr ${NETWORK_BRIDGE_DEVICE_NAT}
        ${CMD_BRCTL} setfd ${NETWORK_BRIDGE_DEVICE_NAT} 0
        ${CMD_IFCONFIG} ${NETWORK_BRIDGE_DEVICE_NAT} ${PRIVATE_GW_NAT} netmask ${PRIVATE_NETMASK} promisc up

    Therefore the lxc network is 192.168.42.0/24 and the host's eth0 IP is 192.168.13.100, set up as a static address via Network Manager.

    iptables:

        *mangle
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT

        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT DROP [0:0]
        :OUTPUT ACCEPT [0:0]
        # Accept traffic from internal interfaces
        -A INPUT -i lo -j ACCEPT
        # Accept traffic from lxc network
        -A INPUT -d 192.168.42.1 -s 192.168.42.0/24 -j ACCEPT
        # Accept internal traffic. Make sure NEW incoming tcp connections are SYN
        # packets; otherwise we need to drop them:
        -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
        # Drop packets with incoming fragments; this attack can result in a Linux server panic and data loss.
        -A INPUT -f -j DROP
        # Drop incoming malformed XMAS packets:
        -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
        # Drop incoming malformed NULL packets:
        -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        # Accept traffic with the ACK flag set
        -A INPUT -p tcp -m tcp --tcp-flags ACK ACK -j ACCEPT
        # Allow incoming data that is part of a connection we established
        -A INPUT -m state --state ESTABLISHED -j ACCEPT
        # Allow data that is related to existing connections
        -A INPUT -m state --state RELATED -j ACCEPT
        # Accept responses to DNS queries
        -A INPUT -p udp -m udp --dport 1024:65535 --sport 53 -j ACCEPT
        # Accept responses to our pings
        -A INPUT -p icmp -m icmp --icmp-type echo-reply -j ACCEPT
        # Accept notifications of unreachable hosts
        -A INPUT -p icmp -m icmp --icmp-type destination-unreachable -j ACCEPT
        # Accept notifications to reduce sending speed
        -A INPUT -p icmp -m icmp --icmp-type source-quench -j ACCEPT
        # Accept notifications of lost packets
        -A INPUT -p icmp -m icmp --icmp-type time-exceeded -j ACCEPT
        # Accept notifications of protocol problems
        -A INPUT -p icmp -m icmp --icmp-type parameter-problem -j ACCEPT
        # Respond to pings, but limit
        -A INPUT -m icmp -p icmp --icmp-type echo-request -m state --state NEW -m limit --limit 6/s -j ACCEPT
        # Allow connections to SSH server
        -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m limit --limit 12/s -j ACCEPT
        COMMIT

        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 2221 -m state --state NEW -m limit --limit 12/s -j DNAT --to-destination 192.168.42.11:22
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 80 -m state --state NEW -m limit --limit 512/s -j DNAT --to-destination 192.168.42.11:80
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 443 -m state --state NEW -m limit --limit 512/s -j DNAT --to-destination 192.168.42.11:443
        -A POSTROUTING -d 192.168.42.0/24 -o eth0 -j SNAT --to-source 192.168.13.100
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT

    sysctl:

        net.ipv4.conf.all.forwarding = 1
        net.ipv4.conf.all.mc_forwarding = 0
        net.ipv4.conf.default.forwarding = 1
        net.ipv4.conf.default.mc_forwarding = 0
        net.ipv4.ip_forward = 1

    I've set up full iptables logging on the container; none of the packets addressed to 192.168.13.100, port 80, reach the container. I've even tried different kernels (the server kernel, the raring LTS kernel, etc.) and modprobed everything iptables- and NAT-related. Nothing. Any ideas?
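    One way to narrow down where the packets die is a bare TCP test that bypasses the real services: run a minimal listener inside the container and connect from the outside box. If connecting to the container IP from the host works but the NATed path does not, the problem is isolated to the PREROUTING/forwarding path. A sketch, with the IPs and ports taken from the config above:

        # Run inside the container: minimal listener on port 80.
        import socket

        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 80))
        srv.listen(1)
        conn, addr = srv.accept()
        print("got connection from", addr)
        conn.close()

        # From another box, test the NATed path:
        #   python -c "import socket; socket.create_connection(('192.168.13.100', 80), timeout=5)"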


  • I am trying to figure out the best way to understand how to cache domain objects

    - by Brett Ryan
    I've always done this wrong, I'm sure a lot of others have too, hold a reference via a map and write through to DB etc.. I need to do this right, and I just don't know how to go about it. I know how I want my objects to be cached but not sure on how to achieve it. What complicates things is that I need to do this for a legacy system where the DB can change without notice to my application. So in the context of a web application, let's say I have a WidgetService which has several methods: Widget getWidget(); Collection<Widget> getAllWidgets(); Collection<Widget> getWidgetsByCategory(String categoryCode); Collection<Widget> getWidgetsByContainer(Integer parentContainer); Collection<Widget> getWidgetsByStatus(String status); Given this, I could decide to cache by method signature, i.e. getWidgetsByCategory("AA") would have a single cache entry, or I could cache widgets individually, which would be difficult I believe; OR, a call to any method would then first cache ALL widgets with a call to getAllWidgets() but getAllWidgets() would produce caches that match all the keys for the other method invocations. For example, take the following untested theoretical code. Collection<Widget> getAllWidgets() { Entity entity = cache.get("ALL_WIDGETS"); Collection<Widget> res; if (entity == null) { res = loadCache(); } else { res = (Collection<Widget>) entity.getValue(); } return res } Collection<Widget> loadCache() { // Get widgets from underlying DB Collection<Widget> res = db.getAllWidgets(); cache.put("ALL_WIDGETS", res); Map<String, List<Widget>> byCat = new HashMap<>(); for (Widget w : res) { // cache by different types of method calls, i.e. by category if (!byCat.containsKey(widget.getCategory()) { byCat.put(widget.getCategory(), new ArrayList<Widget>); } byCat.get(widget.getCatgory(), widget); } cacheCategories(byCat); return res; } Collection<Widget> getWidgetsByCategory(String categoryCode) { CategoryCacheKey key = new CategoryCacheKey(categoryCode); Entity ent = cache.get(key); if (entity == null) { loadCache(); } ent = cache.get(key); return ent == null ? Collections.emptyList() : (Collection<Widget>)ent.getValue(); } NOTE: I have not worked with a cache manager, the above code illustrates cache as some object that may hold caches by key/value pairs, though it's not modelled on any specific implementation. Using this I have the benefit of being able to cache all objects in the different ways they will be called with only single objects on the heap, whereas if I were to cache the method call invocation via say Spring It would (I believe) cache multiple copies of the objects. I really wish to try and understand the best ways to cache domain objects before I go down the wrong path and make it harder for myself later. I have read the documentation on the Ehcache website and found various articles of interest, but nothing to give a good solid technique. Since I'm working with an ERP system, some DB calls are very complicated, not that the DB is slow, but the business representation of the domain objects makes it very clumsy, coupled with the fact that there are actually 11 different DB's where information can be contained that this application is consolidating in a single view, this makes caching quite important.


  • Kinect losing tracked players with Beta2 SDK

    - by Eric B
    I'm creating a game using the Beta2 SDK for Kinect. The issue I am having is that in the middle of gameplay, if another person enters the Kinect's FOV, it stops tracking the player and will not track anyone else for several minutes. The same happens if the player leaves the FOV and re-enters it. Here is what I'm using to detect players:

        void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            int playersAlive = 0;

            // reset lists
            skeletons = new Dictionary<int, SkeletonData>();        // new list for skeletons
            menuSkeleton = new List<SkeletonData>();
            initialPlayers = new Dictionary<float, SkeletonData>(); // new list for initial players

            foreach (SkeletonData s in e.SkeletonFrame.Skeletons) // each skeleton the Kinect has detected
            {
                if (s.TrackingState == SkeletonTrackingState.Tracked) // players found
                {
                    menuSkeleton.Add(s);
                    if (initialized) // after initialization
                    {
                        skeletons.Add(s.TrackingID, s);
                    }
                    else // before initialization, add this player to the initial player list
                    {
                        initialPlayers.Add(s.Joints[JointID.ShoulderCenter].Position.X, s);
                    }
                    playersAlive++;
                }
            }

            if (playersAlive == TOTAL_PLAYERS_ALLOWED) // if there is one player
            {
                if (!inMiniGame)               // before the game starts
                    gameStart = DateTime.Now;  // reset the initialization timer

                if (!initialized) // before initialization (NOTE TO SELF: I took out && inMenu)
                {
                    InitializePlayers();
                    if (DateTime.Now.Subtract(gameStart).TotalMilliseconds > INITIALIZATION_WAIT_TIME)
                    {
                        initialized = true; // initialize timers from a fixed starting time
                        if (inMiniGame)     // if the game has started
                        {
                            gamePause = gameStart;
                            // TODO ERIC: initialize any timers here
                        }
                    }
                }
            }
        }

        /// <summary>
        /// Initializes the players, adding them to a list and making one of the
        /// players the menu controller. For LIM we will need to change the code so
        /// that the game only recognizes and supports one player at a time;
        /// variable names will need to be changed as well.
        /// </summary>
        private void InitializePlayers()
        {
            List<float> initialPos = new List<float>(); // used to track starting positions
            players = new Dictionary<int, Player>();

            foreach (float pos in initialPlayers.Keys)
            {
                initialPos.Add(pos); // add the position of each initial player to the list
            }

            float first = initialPos[0]; // left player first, right player second
            Player player = new Player(initialPlayers[first].TrackingID, true);
            player.PlayerNumber = PLAYER_ONE;
            player.Skeleton = initialPlayers[first];
            player.Specifics = new PlayerSpecifics(player.PlayerNumber);
            player.Specifics.PauseTimer = gameStart;
            players.Add(initialPlayers[first].TrackingID, player);

            menuController = initialPlayers[first].TrackingID; // the menu controller is player 1
        }

    This is a one-player game. Also, when the game starts, initialized is set to false, and it gets set to true when I go from the game's menu into gameplay. So, can anyone see any issue with this code block that would cause the Kinect to lose players as they enter/exit the FOV and not re-track them? Thank you for any help.


  • How to upgrade a remote server from 8.10 to newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (since upgraded to 9.04; see the update below) that I can only access via SSH. If I run apt-get update, I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:

    1. Ran apt-get update, which returns errors like:

           Err http://gb.archive.ubuntu.com intrepid/main Packages
           404 Not Found
           [and the same for many other packages]

    2. Ran do-release-upgrade, which returns:

           Checking for a new ubuntu release
           Failed Upgrade tool signature
           Failed Upgrade tool
           Done downloading
           extracting 'jaunty.tar.gz'
           Failed to extract
           Extracting the upgrade failed. There may be a problem with the network or with the server.

    3. Edited /etc/update-manager/release-upgrades and changed Prompt=normal to Prompt=lts (as suggested here). Running do-release-upgrade after this returns:

           Checking for a new ubuntu release
           current dist not found in meta-release file
           No new release found

    4. (Updated) Followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order...

    So basically, it seems like I cannot upgrade because my current distro is out of date and not supported. Is there a way to upgrade directly to 10.x or 11.x? Note, as this is a server, I only have command-line access.

    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources (a sketch of what those lines look like follows this post). I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server too. However, now I cannot upgrade from 9.04 to 9.10. Running do-release-upgrade produces the same error as #2 above, except it "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:

        A fatal error occurred

        Please report this as a bug and include the files /var/log/dist-upgrade/main.log and
        /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted. Your original
        sources.list was saved in /etc/apt/sources.list.distUpgrade.

        Traceback (most recent call last):
          File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in
            sys.exit(main())
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main
            if app.run():
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run
            return self.fullUpgrade()
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade
            if not self.doPostInitialUpdate():
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate
            self.quirks.run("PostInitialUpdate")
          File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run
            for plugin in self.plugin_manager.get_plugins(condition):
          File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins
            filenames = self.get_plugin_files()
          File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files
            basenames = [x for x in os.listdir(dirname)
        OSError: [Errno 2] No such file or directory: './plugins'

    It does say to report the bug, but since this is an old, unsupported release, I don't know if it's worth doing. However, is there a way around this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?
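    For reference, the alternate apt sources for EOL releases live on the old-releases server, so the sources.list entries I used looked something like this (jaunty shown; adjust the release name as you step through the upgrades):

        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse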


  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people asked me about an Azure version of StarterSTS. While I kind of knew what I had to do to make the move, I couldn't find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Providers

    Fortunately StarterSTS is already based on the idea of "providers". Authentication, roles and claims generation are based on the standard ASP.NET provider infrastructure, which makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile providers. In addition, StarterSTS has its own providers to abstract resource access for certificates, relying-party registration, client-certificate registration and delegation. So I only had to provide new implementations: signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage.

    The one thing I didn't anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is "locked" to the standard .NET configuration system. The new version will have a pluggable SettingsProvider, with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into, e.g., a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions is just a matter of using different providers.

    URL Handling

    Another thing that's substantially different on Azure (and in load-balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET's Request.Url return the current (internal) machine name, but you typically need the address of the external-facing load balancer. There's a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files; a small sketch of the idea follows at the end of this post.

    New Features

    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying-party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can easily be extended to use products like memcached or AppFabric Caching. Starting with the relying-party provider, I now also provide a read/write interface, which allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying-party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work in the tracing and health-monitoring area, which is especially important for the Azure version. Stay tuned.
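    The Host-header trick applies to any load-balanced web stack, not just WCF. As a framework-neutral illustration, here is a minimal sketch in Python (WSGI-style; the function name is invented): build externally visible URLs from the client-supplied Host header rather than the local machine name, so generated links point at the load balancer. In production you would only trust Host values you expect.

        # Build external URLs from the Host header instead of the machine name,
        # so links survive a load balancer. A WSGI environ is assumed; validate
        # the Host header against a whitelist in real deployments.
        def external_base_url(environ):
            scheme = environ.get("wsgi.url_scheme", "http")
            host = environ.get("HTTP_HOST") or environ["SERVER_NAME"]
            return "%s://%s" % (scheme, host)

        # e.g. external_base_url(environ) + "/FederationMetadata/2007-06/FederationMetadata.xml"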


  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1 GB single-core VPS with 3 GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic; I don't really monitor it closely, but Google AdSense usually records close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment.

    The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500 MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again.

    The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "static" has also been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended, because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50; it doesn't make a lot of difference, but I currently have it at 10. The same goes for the spare-servers values. I have also set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference.

    According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like:

        WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating

    and:

        WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start

    I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did: "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!
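    For the record, the periodic-reload hack boils down to a single crontab entry along these lines (assuming the init script path from the post; adjust the interval to taste):

        */5 * * * * /etc/init.d/php-fpm reload >/dev/null 2>&1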

    Read the article

  • Intern Screening - Software 'Quiz'

    - by Jeremy1026
    I am in charge of selecting a new software development intern for a company that I work with. I wanted to throw a little 'quiz' at the applicants before moving forward with interviews, so as to weed out the group a little and find people who can demonstrate some skill. I put together the following quiz to send to applicants. It focuses only on PHP, because that is what about 95% of the work will be done in. I'm hoping to get some feedback on A. if it's a good idea to send this to applicants and B. if it can be improved upon.

    # 1. FizzBuzz
    # Write a small application that does the following:
    # Counts from 1 to 100
    # For multiples of 3 output "Fizz"
    # For multiples of 5 output "Buzz"
    # For multiples of 3 and 5 output "FizzBuzz"
    # For numbers that are neither multiples of 3 nor 5 output the number.
    <?php ?>

    # 2. Arrays
    # Create a multi-dimensional array that contains
    # keys for 'id', 'lot', 'car_model', 'color', 'price'.
    # Insert three sets of data into the array.
    <?php ?>

    # 3. Comparisons
    # Without executing the code, tell if the expressions
    # below will return true or false.
    <?php
    if ((strpos("a","abcdefg")) == TRUE) echo "True"; else echo "False"; //True or False?
    if ((012 / 4) == 3) echo "True"; else echo "False"; //True or False?
    if (strcasecmp("abc","ABC") == 0) echo "True"; else echo "False"; //True or False?
    ?>

    # 4. Bug Checking
    # The code below is flawed. Fix it so that the code
    # runs properly without producing any Errors, Warnings
    # or Notices, and returns the proper value.
    <?php
    //Determine how many parts are needed to create a 3D pyramid.
    function find_3d_pyramid($rows)
    {
        //Loop through each row.
        for ($i = 0; $i < $rows; $i++)
        {
            $lastRow++;
            //Append the latest row to the running total.
            $total = $total + (pow($lastRow,3));
        }
        //Return the total.
        return $total;
    }
    $i = 3;
    echo "A pyramid consisting of $i rows will have a total of ".find_3d_pyramid($i)." pieces.";
    ?>

    # 5. Quick Examples
    # Create a small example to complete the task
    # for each of the following problems.
    # Create an md5 hash of "Hello World";
    # Replace all occurrences of "_" with "-" in the string "Welcome_to_the_universe."
    # Get the current date and time, in the following format, YYYY/MM/DD HH:MM:SS AM/PM
    # Find the sum, average, and median of the following set of numbers. 1, 3, 5, 6, 7, 9, 10.
    # Randomly roll a six-sided die 5 times. Store the 5 rolls into an array.
    <?php ?>
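    For calibration, question 1 is the classic FizzBuzz. A reference solution, sketched here in Java just to show the expected shape (applicants would of course answer in PHP):

    public final class FizzBuzz {
        public static void main(String[] args) {
            for (int i = 1; i <= 100; i++) {
                if (i % 15 == 0) {
                    System.out.println("FizzBuzz"); // multiple of both 3 and 5
                } else if (i % 3 == 0) {
                    System.out.println("Fizz");
                } else if (i % 5 == 0) {
                    System.out.println("Buzz");
                } else {
                    System.out.println(i);
                }
            }
        }
    }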

    Read the article

  • How can I work efficiently on a desktop sharing workflow?

    - by OSdave
    I am a freelance Magento developer, based in Spain. One of my clients is a Germany based web development company, and they're asking me for something I think is impossible. OK, maybe not impossible, but definitely not a preferred way of doing things. One of their clients has a Magento Enterprise installation, which is the paid (and I think proprietary) version of Magento. Their client has forbidden them to download the files from his server. My client is now asking me to study one particular module of the application in order to interact with it from a custom module I'll have to develop. As they have read-only ssh access to their client's server, they came up with this solution: set up a desktop/screen sharing session between one of their developers' stations and mine, alongside a Skype conversation. Their idea is that I'll say to the developer: "show me file foo.php". The developer will then open this foo.php file in his IDE, and I'll then have to ask him to show me the bar method, the parent class, etc. Remember that it's a read-only session, so forget about putting a Zend_Debug::log() anywhere, and don't even think about an xDebug breakpoint (they don't use any kind of debugger, sic). Their client has also forbidden them to use any version control system... My first reaction when they explained this to me was (and I actually did say it out loud to them): "Well, find another client." But they took it as a joke. I understand that from a business point of view rejecting a client is not good practice, but I think the conditions of this assignment make it impossible to complete, at least according to my workflow. I mean, the way I work on or learn a new framework/program is:

    - download all files and a copy of the db to my pc
    - create a git repository and a branch
    - run the application locally
    - use breakpoints
    - use Zend_Debug::log()
    - write the code and tests
    - commit to the git repo
    - upload to the server (test/staging first if there is one, production if not)

    I have agreed to try the desktop sharing session, although I think it will be a waste of time. On one hand I don't mind, they pay me for that time, but I know myself and I don't like the sensation of losing my time. On the other hand, I have other clients for whom I can work according to my workflow. I am about to tell them that I cannot (don't want to) do it. Well, I'll first try this desktop sharing session: maybe I'm wrong and it can actually work. But I like to consider myself a professional, and I know that I don't know everything. So I try to keep an open mind and I am always willing to learn new stuff. So my questions are:

    - Can this desktop-sharing workflow work?
    - What should be done in order to get the most out of it?
    - Taking into account all the obstacles (geographic locations, no local copy, no git), is there another way for me to work on this project?

    Read the article

  • glutPostRedisplay() does not update display

    - by A D
    I am currently drawing a rectangle to the screen and would like to move it by using the arrow keys. However, when I press an arrow key, the vertex data changes but the display does not refresh to reflect these changes, even though I am calling glutPostRedisplay(). Is there something else that I must do? My code:

    #include <GL/glew.h>
    #include <GL/freeglut.h>
    #include <GL/freeglut_ext.h>
    #include <iostream>
    #include "Shaders.h"

    using namespace std;

    const int NUM_VERTICES = 6;
    const GLfloat POS_Y = -0.1;
    const GLfloat NEG_Y = -0.01;

    struct Vertex
    {
        GLfloat x;
        GLfloat y;
        Vertex() : x(0), y(0) {}
        Vertex(GLfloat givenX, GLfloat givenY) : x(givenX), y(givenY) {}
    };

    Vertex left_paddle[NUM_VERTICES];

    void init()
    {
        glClearColor(1.0f, 1.0f, 1.0f, 0.0f);

        left_paddle[0] = Vertex(-0.95f, 0.95f);
        left_paddle[1] = Vertex(-0.95f, 0.0f);
        left_paddle[2] = Vertex(-0.85f, 0.95f);
        left_paddle[3] = Vertex(-0.85f, 0.95f);
        left_paddle[4] = Vertex(-0.95f, 0.0f);
        left_paddle[5] = Vertex(-0.85f, 0.0f);

        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);

        GLuint buffer;
        glGenBuffers(1, &buffer);
        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        glBufferData(GL_ARRAY_BUFFER, sizeof(left_paddle), NULL, GL_STATIC_DRAW);

        GLuint program = init_shaders("vshader.glsl", "fshader.glsl");
        glUseProgram(program);

        GLuint loc = glGetAttribLocation(program, "vPosition");
        glEnableVertexAttribArray(loc);
        glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glBindVertexArray(vao);
    }

    void movePaddle(Vertex* array, GLfloat change)
    {
        for (int i = 0; i < NUM_VERTICES; i++)
        {
            array[i].y = array[i].y + change;
        }
        glutPostRedisplay();
    }

    void special(int key, int x, int y)
    {
        switch (key)
        {
            case GLUT_KEY_DOWN:
                movePaddle(left_paddle, NEG_Y);
                break;
        }
    }

    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 6);
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
        glutInitWindowSize(500, 500);
        glutCreateWindow("Rectangle");
        glewInit();
        init();
        glutDisplayFunc(display);
        glutSpecialFunc(special);
        glutMainLoop();
        return 0;
    }
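    One detail worth noticing in the listing above: glBufferData is called once with NULL, and movePaddle only edits the CPU-side array, so the VBO never receives the updated (or even the initial) vertices. A sketch of re-uploading after each move, written in Java/LWJGL style for illustration (the equivalent C calls are glBindBuffer and glBufferSubData):

    import java.nio.FloatBuffer;
    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL15;

    public final class PaddleUpload {
        // Hypothetical helper: push the current CPU-side vertices into the VBO
        // so the next redisplay actually draws the moved paddle.
        public static void upload(int bufferId, float[] vertices) {
            FloatBuffer data = BufferUtils.createFloatBuffer(vertices.length);
            data.put(vertices).flip();
            GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, bufferId);
            GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, data);
        }
    }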

    Read the article

  • Z axis trouble with glTranslatef(...) - LWJGL

    - by Zarkopafilis
    Here is the code:

    private static boolean up = true, down = false, left = false, right = false, reset = false, in = false, out = false;

    public void start()
    {
        try
        {
            Display.setDisplayMode(new DisplayMode(800, 600));
            Display.create();
        }
        catch (LWJGLException e)
        {
            e.printStackTrace();
            System.exit(0);
        }

        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GL11.glOrtho(0, 800, 0, 600, 0.00001f, 1000);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        Keyboard.enableRepeatEvents(true);

        while (!Display.isCloseRequested())
        {
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
            input();

            if (up)    { GL11.glTranslatef(0, 0.1f, 0); }
            if (down)  { GL11.glTranslatef(0, -0.1f, 0); }
            if (left)  { GL11.glTranslatef(-0.1f, 0, 0); }
            if (right) { GL11.glTranslatef(0.1f, 0, 0); }
            if (in)    { GL11.glTranslatef(0, 0, 1f); }
            if (out)   { GL11.glTranslatef(0, 0, -1f); }
            if (reset) { GL11.glLoadIdentity(); }

            GL11.glBegin(GL11.GL_QUADS);
            GL11.glColor3f(255, 255, 255);
            GL11.glVertex3f(800/2, 600/2, 0);
            GL11.glVertex3f(800/2 + 200, 600/2, 0);
            GL11.glVertex3f(800/2 + 200, 600/2 + 200, 0);
            GL11.glVertex3f(800/2, 600/2 + 200, 0);
            GL11.glColor3f(0, 255, 0);
            GL11.glVertex3f(800/2, 600/2, 1);
            GL11.glVertex3f(800/2 + 200, 600/2, 1);
            GL11.glVertex3f(800/2 + 200, 600/2 + 200, 1);
            GL11.glVertex3f(800/2, 600/2 + 200, 1);
            GL11.glEnd();

            Display.update();
        }
        Display.destroy();
    }

    public static void main(String[] argv)
    {
        new main().start();
    }

    public void input()
    {
        up = false; down = false; left = false; right = false;
        reset = false; in = false; out = false;

        if (Keyboard.isKeyDown(Keyboard.KEY_SPACE)) { reset = true; }
        if (Keyboard.isKeyDown(Keyboard.KEY_W)) { up = true; }
        if (Keyboard.isKeyDown(Keyboard.KEY_S)) { down = true; }
        if (Keyboard.isKeyDown(Keyboard.KEY_A)) { left = true; }
        if (Keyboard.isKeyDown(Keyboard.KEY_D)) { right = true; }
        if (Keyboard.isKeyDown(Keyboard.KEY_Q)) { in = true; }
        if (Keyboard.isKeyDown(Keyboard.KEY_E)) { out = true; }
    }

    As you can see, I am creating 2 quads: a white one at z = 0 and a green one at z = 1. The WASD keys function correctly. Also, when I hit SPACEBAR the white quad is shown. If I then press E, I can see the green quad. But if I press Q afterwards, I don't see the white one again! (Space works every time.) Also, if I render the green one at z = -1 everything works perfectly, BUT you may need up to 3 key presses of Q/E to see the other quad. Why is that happening?
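    One thing the listing shows: only the color buffer is cleared and depth testing is never enabled, so which quad appears is decided by draw order and by clipping against the glOrtho near/far range, not by a depth comparison. A small sketch of the depth-buffer variant worth testing (assumes the same LWJGL setup; illustrative only):

    import org.lwjgl.opengl.GL11;

    public final class DepthSetup {
        // Call once after Display.create().
        public static void enable() {
            GL11.glEnable(GL11.GL_DEPTH_TEST);
            GL11.glDepthFunc(GL11.GL_LEQUAL);
        }

        // Call at the top of each frame instead of clearing color alone.
        public static void clearFrame() {
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        }
    }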

    Read the article

  • AWS S3 status code 403 SignatureDoesNotMatch - Check your key and signing method

    - by Matt
    I have checked and rechecked my keys and they are correct, but I keep getting the exception below whenever I try to upload. Can anyone shed some light on this problem?

    11-01 09:21:26.331: W/System.err(13934): AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: B97CC81E13D81XXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: PPieuQpqElIizNBQPc42JwC4WnBkUciCKRT5S4HSftBraeacN8y0lKfsVP9LXXXX
    11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:659)
    11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:330)
    11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:182)
    11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3038)
    11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1218)
    11-01 09:21:26.331: W/System.err(13934): at com.apprssd.capsule.S3UploaderActivity$S3PutObjectTask.doInBackground(S3UploaderActivity.java:165)
    11-01 09:21:26.331: W/System.err(13934): at com.apprssd.capsule.S3UploaderActivity$S3PutObjectTask.doInBackground(S3UploaderActivity.java:1)
    11-01 09:21:26.331: W/System.err(13934): at android.os.AsyncTask$2.call(AsyncTask.java:287)
    11-01 09:21:26.331: W/System.err(13934): at java.util.concurrent.FutureTask.run(FutureTask.java:234)
    11-01 09:21:26.341: W/System.err(13934): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
    11-01 09:21:26.341: W/System.err(13934): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
    11-01 09:21:26.341: W/System.err(13934): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
    11-01 09:21:26.341: W/System.err(13934): at java.lang.Thread.run(Thread.java:841)

    It happens during the below.

    PutObjectRequest por = new PutObjectRequest(
            Constants.getDataBucket(),
            selectedZip.toString(),
            selectedZip);
    s3Client.putObject(por);
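    SignatureDoesNotMatch generally means the secret key bytes used to sign the request differ from what S3 expects, and whitespace picked up while copy-pasting keys is a classic culprit. A minimal sketch (AWS SDK for Java v1; the accessKey and secretKey parameters are placeholders) that constructs the client with explicit, trimmed credentials to rule this out:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.s3.AmazonS3Client;
    import com.amazonaws.services.s3.model.PutObjectRequest;

    public final class S3Check {
        public static void upload(String accessKey, String secretKey,
                                  String bucket, java.io.File file) {
            // trim() guards against stray whitespace from a copy-paste
            BasicAWSCredentials creds =
                    new BasicAWSCredentials(accessKey.trim(), secretKey.trim());
            AmazonS3Client s3 = new AmazonS3Client(creds);
            s3.putObject(new PutObjectRequest(bucket, file.getName(), file));
        }
    }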

    Read the article

  • HTML Agility Pack - ReplaceNode doesn't change the InnerHTML of the Body

    - by morsanu
    Hi there, I have this.

    The body:

    <body><p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent leo leo, ultrices eu venenatis et, rutrum fringilla dolor.</p></body>

    The code:

    HtmlNode body = doc.DocumentNode.SelectSingleNode("//body");
    Dictionary<HtmlNode, HtmlNode> toReplace = new Dictionary<HtmlNode, HtmlNode>();
    // I do some logic here adding nodes to the toReplace dictionary.
    foreach (HtmlNode replaceNode in toReplace.Keys)
    {
        replaceNode.ParentNode.ReplaceChild(toReplace[replaceNode], replaceNode);
    }

    After I do this, the InnerHtml of the body node remains the same as at the beginning, although the OuterHtml and the InnerText show the correct result. Is there something wrong with my code?

    The result:

    // body.InnerHtml
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent leo leo, ultrices eu venenatis et, rutrum fringilla dolor.</p>

    // body.OuterHtml
    <body><p>Lorem ipsum dolor sit amet...</p></body>

    Read the article

  • Random pauses with "Key dispatching timed out sending to <null>" when closing Android SurfaceView

    - by kenyee
    When I close an Android SurfaceView Activity, the app sometimes pauses and almost gets an ANR (Application Not Responding) message. Looking at LogCat, it appears to be timing out trying to send keys to it. I tried modifying the code to setFocusable(false) in the onKeyDown handler for the SurfaceView, but that doesn't seem to affect this. Any other ideas on what might be causing this? Or what these messages in LogCat even mean?

    =============================
    05-31 19:35:35.285: INFO/WindowManager(586): focus null mToken is null at event dispatch!
    05-31 19:35:35.295: WARN/WindowManager(586): Current state: {{KeyEvent{action=1 code=4 repeat=0 meta=0 scancode=158 mFlags=8} to null @ 1275334535292 lw=null lb=null fin=true gfw=true ed=true tts=0 wf=false fp=false mcf=null}}
    05-31 19:35:35.305: WARN/WindowManager(586): Continuing to wait for key to be dispatched
    05-31 19:35:40.306: WARN/WindowManager(586): Key dispatching timed out sending to
    05-31 19:35:40.316: WARN/WindowManager(586): Dispatch state: {{KeyEvent{action=0 code=4 repeat=0 meta=0 scancode=158 mFlags=8} to Window{43763540 com.myapp/com.myapp.DiagramEdit paused=false} @ 1275334499512 lw=Window{43763540 com.myapp/com.myapp.DiagramEdit paused=false} lb=android.os.BinderProxy@43702190 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=null}}
    05-31 19:35:40.326: INFO/WindowManager(586): focus null mToken is null at event dispatch!
    05-31 19:35:40.326: WARN/WindowManager(586): Current state: {{KeyEvent{action=1 code=4 repeat=0 meta=0 scancode=158 mFlags=8} to null @ 1275334540327 lw=null lb=null fin=true gfw=true ed=true tts=0 wf=false fp=false mcf=null}}
    05-31 19:35:40.326: WARN/WindowManager(586): Continuing to wait for key to be dispatched
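    A pattern worth checking, sketched under the assumption that a dedicated render thread drives the SurfaceView (as in the LunarLander sample; every name below is hypothetical): if surfaceDestroyed() returns while the render thread is still running, or blocks indefinitely, window teardown stalls and key dispatch can time out much as the log shows.

    import android.view.SurfaceHolder;

    public abstract class SafeSurfaceCallback implements SurfaceHolder.Callback {
        private Thread renderThread;       // started in surfaceCreated (not shown)
        private volatile boolean running;  // the render loop exits when false

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            running = false;               // ask the loop to finish its frame
            boolean retry = true;
            while (retry) {
                try {
                    renderThread.join();   // wait until the thread really exits
                    retry = false;
                } catch (InterruptedException e) {
                    // retry; a live render thread would keep touching the dead surface
                }
            }
        }
    }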

    Read the article

  • WinForms ComboBox DropDown and Autocomplete window both appear

    - by Clyde
    I've got a combobox on a WinForms app with this code:

    comboBox1.AutoCompleteMode = AutoCompleteMode.SuggestAppend;
    comboBox1.AutoCompleteSource = AutoCompleteSource.ListItems;

    DataTable t = new DataTable();
    t.Columns.Add("ID", typeof(int));
    t.Columns.Add("Display", typeof(string));
    for (int i = 1; i < 2000; i++)
    {
        t.Rows.Add(i, i.ToString("N0"));
    }

    comboBox1.DataSource = t;
    comboBox1.ValueMember = "ID";
    comboBox1.DisplayMember = "Display";

    I then follow these steps when the window opens:

    1. Click the combobox drop down button. This displays the list of items and selects the text in the combobox.
    2. Type '5', '1' ... i.e. I'm looking to use autocomplete to search for 515, 516, etc.

    You'll see that the autocomplete window now appears ON TOP of the drop down list. However, if I mouse over, it's the obscured drop down window behind the autocomplete window that receives the mouse events, including the click. So I think I'm clicking on an autocomplete item but am actually clicking on something totally random that I can't see. Is this a bug in the ComboBox? I'm using Windows 7, if that matters. Am I configuring the ComboBox wrong somehow? Note also that the KEYBOARD uses the autocomplete drop down. So the up/down arrow keys use the front window, but the mouse uses the back window.

    Read the article

  • Return template as string - Django

    - by Ninefingers
    Hi all, I'm still not sure this is the correct way to go about this, maybe not, but I'll ask anyway. I'd like to re-write WordPress in Django, albeit more simply, myself (justification: because I can), and I'm looking to be able to configure elements in different ways on the page. So for example I might have:

    - Blog models
    - A site update message model
    - A latest comments model

    Now, for each page on the site, I want the user to be able to choose which items go on it and in what order. In my thought process, this would work something like:

    class Page(models.Model):
        Slug = models.CharField(max_length=100)

    class PageItem(models.Model):
        Page = models.ForeignKey(Page)
        ItemType = models.CharField(max_length=100)
        InstanceNum = models.IntegerField()
        # all models have primary keys.

    Then, ideally, my template would loop through all the PageItems on a page, which is easy enough to do. But what if my page item is a site update as opposed to a blog post? Basically, I am thinking I'd like to pull different item types back in different orders and display them using the appropriate templates. Now, I thought one way to do this would be, in views.py, to loop through all of the objects, call the appropriate view function for each, get a bit of HTML back as a string, and then pipe that into the resultant template. My question is: is this the best way to go about doing things? If so, how do I do it? If not, which way should I be going? I'm pretty new to Django, so I'm still learning what it can and can't do, so please bear with me. I've checked SO for dupes and don't think this has been asked before...

    Read the article
