Search Results

Search found 10677 results on 428 pages for 'mod status'.


  • Shell script issue: cron job script to Restart MySQL server when it stops accidentally

    - by Straw Hat
    I am using this script in a cron job to check whether the MySQL service is running and to restart it if it is not:

        #!/bin/bash
        service mysql status | grep 'mysql start/running' > /dev/null 2>&1
        if [ $? != 0 ]
        then
            sudo service mysql restart
        fi

    I set up the cron job with sudo crontab -e and added: */1 * * * * /home/ubuntu/mysql-check.sh The problem is that it restarts MySQL on every cron run, even when the server is already running. What correction does the script need?
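
    A hedged sketch of one possible fix: cron runs with a minimal PATH, so the service command may not resolve, making the status check fail and triggering a restart on every run. Using absolute paths and checking for the mysqld process directly avoids depending on the exact status text (the paths below are typical for Ubuntu and are assumptions; adjust them to your system):

        #!/bin/bash
        # Use absolute paths because cron provides a minimal PATH.
        # Restart only when no mysqld process is found.
        if ! /usr/bin/pgrep mysqld > /dev/null
        then
            /usr/sbin/service mysql restart
        fi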

    Read the article

  • Importing data from text file to specific columns using BULK INSERT

    - by Dinesh Asanka
    Bulk insert is much faster than using other techniques such as SSIS. However, when you are using bulk insert you can't insert into specific columns. If, for example, there are five columns in a table, you should have five values for each record in the text file you are importing from. This is an issue when you are expecting default values to be inserted into tables. Let us say you have a table as below: In this table, you are expecting ID, Status and CreatedDate to be updated automatically, so your text file may only have FirstName and LastName values, as below:

        Dinesh,Asanka
        Saman,Liyanage
        Ruwan,Silva
        Susantha,Bathige
        Jude,Peires
        Sanjeewa,Jayawickrama

    If you use bulk insert against this table as follows, you will get an error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (ID). To avoid this you will need to create a view with only the columns you are expecting to fill, and use bulk insert against that view. If you check the table now, you will see the table populated with the values from the text file plus the default values.
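
    A hedged sketch of the view approach the article describes (the table, view and file names here are assumptions for illustration):

        -- View exposing only the columns present in the text file
        CREATE VIEW dbo.vw_Employee_BulkLoad
        AS
        SELECT FirstName, LastName
        FROM dbo.Employee;
        GO

        -- Bulk insert against the view; ID, Status and CreatedDate receive their defaults
        BULK INSERT dbo.vw_Employee_BulkLoad
        FROM 'C:\Data\names.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');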

    Read the article

  • Howto install google-mock on Ubuntu 12.10

    - by user1459339
    I am having a hard time trying to install the Google C++ Mocking Framework. I have successfully run sudo apt-get install google-mock. Then I tried to compile this sample file:

        #include "gmock/gmock.h"
        int main(int argc, char** argv) {
            ::testing::InitGoogleMock(&argc, argv);
            return RUN_ALL_TESTS();
        }

    with g++ -lgmock main.cpp and got these errors:

        main.cpp:(.text+0x1e): undefined reference to `testing::InitGoogleMock(int*, char**)'
        main.cpp:(.text+0x23): undefined reference to `testing::UnitTest::GetInstance()'
        main.cpp:(.text+0x2b): undefined reference to `testing::UnitTest::Run()'
        collect2: error: ld returned 1 exit status

    I guess the linker cannot find the library files. Does anybody know how to fix this?
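
    A hedged guess at the fix: with GNU ld, libraries named before the source file can be discarded, and gmock also needs gtest and pthread, so putting the source first and the libraries after it is the usual cure (this assumes the Ubuntu package actually ships a prebuilt libgmock; some releases ship it as source under /usr/src/gmock, which has to be built first):

        g++ main.cpp -lgmock -lgtest -lpthread -o tests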

    Read the article

  • MYSQL – Identifying Current Version of MySQL Server Installation – Part 2

    - by Pinal Dave
    Earlier I wrote an article about Detecting Current Version of MySQL Server Installation. After the post I received quite a few emails in which various users suggested that there are many more ways to figure out the version of MySQL. Here are a few of the methods I received in those emails.
    Method 1: This method retrieves the value with the help of Information Functions. SELECT VERSION();
    Method 2: This method is very similar to SQL Server. SELECT @@Version
    Method 3: You can connect to MySQL with the command prompt and type the following command: STATUS;
    Method 4: Please refer to my earlier blog post. SHOW VARIABLES LIKE "%version%";
    Let me know if you know any more methods and I will extend this blog post. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL

    Read the article

  • Guest Post: Instantiate SharePoint Workflow On Item Deleted

    - by Brian Jackett
    In this post, guest author Lucas Eduardo Silva will walk you through the steps of instantiating a workflow using an item event receiver on a custom list. The ItemDeleting event will require approval via the workflow.
    Foreword     As you may have read recently, I injured my right hand and have had it in a cast for the past 3 weeks. Due to this I planned to reduce my blogging while my hand heals. As luck would have it, I was approached by someone who asked if they could be a guest author on my blog. I've never had a guest author, but considering my injury, now seemed like as good a time as ever to try it out.
    About the Guest Author     Lucas Eduardo Silva (email) works for CPM Braxis, a sibling company to my employer Sogeti in the CapGemini family. Lucas and I exchanged emails a few times after one of my recent posts and continued into various topics. When I posted that I had injured my hand, Lucas mentioned that he had a post idea that he would like to publish and asked if it could be published on my blog. The content below is the result of that collaboration.
    The Problem     Lucas has a big problem. He has a workflow that he wants to fire every time an item is deleted from a custom list. He has already created the association in the ItemDeleting event, but the deletion needs to be approved first, and the workflow is finishing before that happens. Lucas added an onWorkflowItemChanged activity to wait for the approval status to change, but it is not being hit.
    The Solution Note: This solution assumes you have the Visual Studio Extensions for Windows SharePoint Services (VSeWSS) installed to access the SharePoint project templates within Visual Studio.
    1 - Create a workflow that will be activated by the ItemEventReceiver.
    2 - Create the list in Visual Studio by clicking File -> New -> Project. Select SharePoint, then List Definition.
    3 - Select the type of document to be created: List, Document Library, Wiki, Tasks, etc.
    4 - Visual Studio creates the file ItemEventReceiver.cs with all possible events in a list.
    5 - In the workflow project, open workflow.xml and copy the ID.
    6 - Uncomment the ItemDeleting event and insert the following code, replacing the ID with the one you copied earlier.

        //Cancel the deletion
        properties.Cancel = true;

        //Activate the deletion workflow
        SPWorkflowManager workflowManager = properties.ListItem.Web.Site.WorkflowManager;

        SPWorkflowAssociation wfAssociation = properties.ListItem.ParentList.WorkflowAssociations.
            GetAssociationByBaseID(new Guid("37b5aea8-792a-4ded-be25-d283d9fe1f9d"));

        workflowManager.StartWorkflow(properties.ListItem, wfAssociation, wfAssociation.AssociationData, true);

        properties.Status = SPEventReceiverStatus.CancelNoError;

    7 - properties.Cancel cancels the event being handled and executes the code inside the event. In the example, it cancels the deletion of the item in order to start the workflow that is associated with the list through the workflow ID.
    8 - Create and deploy the workflow and the list to SharePoint.
    9 - Create a list from the template that was created.
    10 - Enable the workflow on the list and congratulations! Every time you try to delete an item the workflow is activated.
    TIP: If you really want to delete the item after the workflow is done, you will have to delete the item from the workflow itself.
        this.workflowProperties.Site.AllowUnsafeUpdates = true;
        this.workflowProperties.Item.Delete();
        this.workflowProperties.List.Update();

    Conclusion     In this guest post Lucas took you through the steps of creating an item deletion approval workflow with an event receiver. This was also the first time I've had a guest author on this blog. Many thanks to Lucas for putting together this content and offering it. I haven't decided how I'd handle future guest authors, mostly because I don't know if there are others who would want to submit content. If you do have something that you would like to guest author on my blog feel free to drop me a line and we can discuss. As a disclaimer, there are no guarantees that it will be published though. For now enjoy Lucas' post and look for my return to regular blogging soon.         -Frog Out
    <Update 1> If you wish to contact Lucas you can reach him at [email protected] </Update 1>

    Read the article

  • Defining the Features we would like to see

    - by Patrick Liekhus
    OK, now that we have a very rough idea of what we are building, let's get a list of the top features that this application needs to provide. In this next list we are not prioritizing them yet, just getting on paper the high-level backlog of items that this system must do.
    Add a new task to my work queue
    Change the status of the task
    Print a hard copy of the task list by day for my records
    Log a phone conversation
    A manager should be able to assign tasks to another user
    How do we log in?
    Change the Covey roles per user
    Manage the statuses used
    Manage the Covey quadrants
    Can we make this available on the following user interfaces?
    Windows Desktop
    Web Browser
    Silverlight (WPF)
    Excel Add-in
    Outlook Add-in
    Android Devices
    iPhone Devices
    Windows Mobile Devices
    Blackberry Devices
    While this looks like a simple spreadsheet, it can get pretty complex and busy quickly. Next time we will work on turning this into a Product Backlog and prioritizing the features we would like to see.

    Read the article

  • Plan your SharePoint 2010 Content Type Hub carefully

    - by Wayne
    Currently setting up a new environment on SharePoint 2010 (which was made available for download yesterday, if anyone missed that :-). One of the new features of SharePoint 2010 is the Content Type Hub (which is part of the Metadata Service Application), a hub for all Content Types that other Site Collections can subscribe to. That is, you only need to manage your content types in one location. Setting up the Content Type Hub is not that difficult, but you must do it carefully to avoid a lot of extra work and troubleshooting. Here is a short tutorial with a few tips and tricks to make it easy for you to get started. Determine the location of the Content Type Hub First of all you need to decide in which Site Collection to place your Content Type Hub: in the root site collection or a specific one. I think using a specific Site Collection that acts only as a Content Type Hub is the best way; there is no best practice as of now. So I create a new Site Collection, at for instance http://server/sites/CTH/. The top-level site of this site collection should be, for instance, a Team Site. You cannot use Blank Site by default, which would have been the best option IMHO, since that site does not have the Taxonomy feature stapled onto it (check the TaxonomyFeatureStapler feature for which site templates can be used). Configure the Managed Metadata Service Application Next you need to create your Managed Metadata Service Application or configure the existing one, under Central Administration > Application Management > Manage Service Applications. Select the Managed Metadata service application and click Properties if you have already created it. At the bottom of the dialog window, when you are creating the service application or editing its properties, is a section in which to fill in the Content Type Hub. In this text box fill in the URL of the Content Type Hub. It is essential that you have decided where your Content Type Hub will reside, since once this is set you cannot change it. The only way to change it is to rebuild the whole Managed Metadata Service Application! Also make sure that you enter the URL correctly. I did copy and paste the URL once and got /default.aspx in the URL, which funked the whole service up. Make sure that you only use the URL of the Site Collection of the hub. Now you have to set things up so that other Site Collections can consume the content types from the hub. This is done by selecting the connection for the Managed Metadata Service Application and clicking Properties. A new dialog window opens, and there you need to click Consumes content types from the Content Type Gallery at nnnn. Now you are free to syndicate your Content Types from the hub. Publish Content Types To publish a Content Type from the hub you need to go to Site Settings > Content Types and select the content type that you would like to publish. Then select Manage publishing for this content type. This takes you to a page from which you can Publish, Unpublish or Republish the content type. Once the content type is published it can take up to an hour for the subscribing Site Collections to get it. This is controlled by the Content Type Subscriber job, which is scheduled to run once an hour. To speed up your publishing just go to Central Administration > Monitoring > Review Job Definitions > Content Type Subscriber and click Run Now, and your content type will very soon be available for use. 
Published Content Type status You can check the status of content type publishing in your destination site collections by selecting Site Settings > Content Type Publishing. From here you can force a refresh of all subscribed content types, see which ones are subscribed, and check the publishing error log. This error log is very useful for detecting errors during publishing. For instance, if you use features such as ratings, metadata, or document IDs in your content type hub and your destination site collection does not have those features available, this will be reported here.

    Read the article

  • how to install g77 on ubuntu 12.04

    - by ubuntu-beginner
    I want a working g77 compiler on my Ubuntu 12.04 64-bit laptop, so I did the following: 1. I changed sources.list by adding the following lines:

        deb http...hu.archive.ubuntu.com/ubuntu/ hardy universe
        deb-src ..//hu.archive.ubuntu.com/ubuntu/ hardy universe
        deb http:...hu.archive.ubuntu.com/ubuntu/ hardy-updates universe
        deb-src ..//hu.archive.ubuntu.com/ubuntu/ hardy-updates universe

    2. Then in a terminal I did the following: sudo apt-get update sudo apt-get install g77 Things looked very nice then. But when I tried to compile my Fortran 77 program with g77, I got the following errors:

        /usr/bin/ld: cannot find crt1.o: No such file or directory
        /usr/bin/ld: cannot find crti.o: No such file or directory
        /usr/bin/ld: cannot find -lgcc_s
        collect2: ld returned 1 exit status

    Why doesn't g77 work properly? Many people need g77; why can't Ubuntu offer a workable g77? Please help me! Thanks from a ubuntu-beginner
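
    A hedged workaround that is often suggested for this combination: the hardy-era g77 looks for startup files in /usr/lib, while 64-bit Ubuntu 12.04 keeps them in the multiarch directory, so pointing the linker at that directory may resolve the crt1.o/crti.o errors (the -lgcc_s error may additionally need the gcc package or a libgcc_s.so symlink; the file name below is a placeholder):

        export LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LIBRARY_PATH
        export C_INCLUDE_PATH=/usr/include/x86_64-linux-gnu:$C_INCLUDE_PATH
        g77 program.f -o program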

    Read the article

  • Out of disk space - /boot at 100%

    - by uvasal
    My /boot is at 100%. When I run aptitude search ~ilinux-image I'm getting loads of unused images. When I try to delete one of them (after checking which one is currently in use by doing uname -r), e.g apt-get autoremove linux-image-3.2.0-44-generic I get: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: linux-generic : Depends: linux-headers-generic (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed linux-server : Depends: linux-headers-server (= 3.2.0.51.61) but 3.2.0.54.64 is to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). And running apt-get -f install throws No space left on device. I've also tried doing apt-get purge but I am getting the same thing. Output of df -h and dpkg -l linux-*.: root@hb2088:/srv/www# df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 9.4G 3.0G 6.0G 34% / udev 301M 4.0K 301M 1% /dev tmpfs 124M 228K 124M 1% /run none 5.0M 0 5.0M 0% /run/lock none 309M 0 309M 0% /run/shm /dev/sda1 92M 91M 0 100% /boot root@hb2088:/srv/www# dpkg -l linux-* Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version Description +++-====================================================-====================================================-======================================================================================================================== un linux-doc-3.2.0 <none> (no description available) ii linux-firmware 1.79.6 Firmware for Linux kernel drivers iU linux-generic 3.2.0.51.61 Complete Generic Linux kernel un linux-headers <none> (no description available) un linux-headers-3 <none> (no description available) un linux-headers-3.0 <none> (no description available) ii linux-headers-3.2.0-44 3.2.0-44.69 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-44-generic 3.2.0-44.69 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-45 3.2.0-45.70 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-45-generic 3.2.0-45.70 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-48 3.2.0-48.74 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-48-generic 3.2.0-48.74 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-51 3.2.0-51.77 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-51-generic 3.2.0-51.77 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP ii linux-headers-3.2.0-52 3.2.0-52.78 Header files related to Linux kernel version 3.2.0 ii linux-headers-3.2.0-52-generic 3.2.0-52.78 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP iU linux-headers-3.2.0-54 3.2.0-54.82 Header files related to Linux kernel version 3.2.0 iU linux-headers-3.2.0-54-generic 3.2.0-54.82 Linux kernel headers for version 3.2.0 on 64 bit x86 SMP iU linux-headers-generic 3.2.0.54.64 Generic Linux kernel headers iU linux-headers-server 3.2.0.54.64 Linux kernel headers on Server Equipment. 
un linux-image <none> (no description available) un linux-image-3.0 <none> (no description available) ii linux-image-3.2.0-44-generic 3.2.0-44.69 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-45-generic 3.2.0-45.70 Linux kernel image for version 3.2.0 on 64 bit x86 SMP ii linux-image-3.2.0-48-generic 3.2.0-48.74 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-51-generic 3.2.0-51.77 Linux kernel image for version 3.2.0 on 64 bit x86 SMP iF linux-image-3.2.0-52-generic 3.2.0-52.78 Linux kernel image for version 3.2.0 on 64 bit x86 SMP in linux-image-3.2.0-54-generic <none> (no description available) iU linux-image-generic 3.2.0.51.61 Generic Linux kernel image iU linux-image-server 3.2.0.51.61 Linux kernel image on Server Equipment. un linux-initramfs-tool <none> (no description available) un linux-kernel-headers <none> (no description available) un linux-kernel-log-daemon <none> (no description available) ii linux-libc-dev 3.2.0-52.78 Linux Kernel Headers for development un linux-restricted-common <none> (no description available) iU linux-server 3.2.0.51.61 Complete Linux kernel on Server Equipment. un linux-source-3.2.0 <none> (no description available) un linux-tools <none> (no description available) Output of du -sh /boot/*: root@hb2088:~# du -sh /boot/* 781K /boot/abi-3.2.0-44-generic 781K /boot/abi-3.2.0-45-generic 781K /boot/abi-3.2.0-48-generic 781K /boot/abi-3.2.0-51-generic 781K /boot/abi-3.2.0-52-generic 139K /boot/config-3.2.0-44-generic 139K /boot/config-3.2.0-45-generic 139K /boot/config-3.2.0-48-generic 139K /boot/config-3.2.0-51-generic 139K /boot/config-3.2.0-52-generic 1.6M /boot/grub 14M /boot/initrd.img-3.2.0-44-generic 14M /boot/initrd.img-3.2.0-45-generic 14M /boot/initrd.img-3.2.0-48-generic 12K /boot/lost+found 174K /boot/memtest86+.bin 176K /boot/memtest86+_multiboot.bin 2.8M /boot/System.map-3.2.0-44-generic 2.8M /boot/System.map-3.2.0-45-generic 2.8M /boot/System.map-3.2.0-48-generic 2.8M /boot/System.map-3.2.0-51-generic 2.8M /boot/System.map-3.2.0-52-generic 4.8M /boot/vmlinuz-3.2.0-44-generic 4.8M /boot/vmlinuz-3.2.0-45-generic 4.8M /boot/vmlinuz-3.2.0-48-generic 4.8M /boot/vmlinuz-3.2.0-51-generic 4.8M /boot/vmlinuz-3.2.0-52-generic
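
    One commonly used way out of this chicken-and-egg situation (a sketch, assuming 3.2.0-44 is not the running kernel, as the question implies): remove one old kernel directly with dpkg to free space in /boot, then let apt repair the half-installed packages.

        # Purge the oldest kernel without going through apt's dependency resolver
        sudo dpkg --purge linux-image-3.2.0-44-generic linux-headers-3.2.0-44-generic linux-headers-3.2.0-44
        # With space freed, finish the pending installs and clean up the rest
        sudo apt-get -f install
        sudo apt-get autoremove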

    Read the article

  • Game based on Ajax polling for 12 players

    - by Azincourt
    I am planning on writing a small browser game. The webserver is a shared server, with no root access and no installs possible. I want to use AJAX for client/server communication. There will be 12 players, and each player would be polling the server for the current game status every X milliseconds (let's say 200 ms). Polling every 200 ms means 5 requests per second per player, so 12 players x 5 = 60 requests per second. Can Apache handle those requests? What might be the bottlenecks with this approach?

    Read the article

  • Replication - between pools in the same system

    - by Steve Tunstall
    OK, I fully understand that it's been a LONG time since I've blogged with any tips or tricks on the ZFSSA, and I'm way behind. Hey, I just wrote TWO BLOGS ON THE SAME DAY!!! Make sure you keep scrolling down to see the next one too, or you may have missed it. To celebrate, for the one or two of you out there who are still reading this, I got something for you. The first TWO people who make any comment below, with your real name and email so I can contact you, will get some cool Oracle SWAG that I have to give away. Don't get excited, it's not an iPad, but it's pretty good stuff. Only the first two, so if you already see two below, then settle down. Now, let's talk about Replication and Migration. I have talked before about Shadow Migration here: https://blogs.oracle.com/7000tips/entry/shadow_migration Shadow Migration lets one take a NFS or CIFS share in one pool on a system and migrate that data over to another pool in the same system. That's handy, but right now it's only for file systems like NFS and CIFS. It will not work for LUNs. LUN shadow migration is a roadmap item, however. So.... What if you have a ZFSSA cluster with multiple pools, and you have a LUN in one pool but later you decide it's best if it was in the other pool? No problem. Replication to the rescue. What's that? Replication is only for replicating data between two different systems? Who told you that? We've been able to replicate to the same system for a few code updates now. The instructions below will also work just fine if you're setting up replication between two different systems. After replication is complete, you can easily break replication, change the new LUN into a primary LUN and then delete the source LUN. Bam.
    Step 1. Set up a target system. In our case, the target system is ourself, but you still have to set it up like it's far away. Go to Configuration-->Services-->Remote Replication. Click the plus sign and set up the target, which is the ZFSSA you're on now.
    Step 2. Now you can go to the LUN you want to replicate. Take note of which Pool and Project you're in. In my case, I have a LUN in Pool2 called LUNp2 that I wish to replicate to Pool1.
    Step 3. In my case, I made a Project called "Luns" and it has LUNp2 inside of it. I am going to replicate the Project, which will automatically replicate all of the LUNs and/or Filesystems inside of it. Now, you can also replicate from the Share level instead of the Project. That will only replicate the share, and not all the other shares of a project. If someone tells you that if you replicate a share, it always replicates all the other shares also in that Project, don't listen to them. Note below how I can choose not only the Target (which is myself), but I can also choose which Pool to replicate it to. So I choose Pool1.
    Step 4. I did not choose a schedule or pick the "Continuous" button, which means my replication will be manual only. I can now push the Manual Replicate button on my Actions list and you will see it start. You will see both a barber pole animation and also an update in the status bar at the top of the screen that a replication event has begun. This also goes into the event log.
    Step 5. The status bar will also log an event when it's done.
    Step 6. If you go back to Configuration-->Services-->Remote Replication, you will see your event.
    Step 7. Done. 
To see your new replica, go to the other Pool (Pool1 for me), and click the "Replica" area below the words "Filesystems | LUNs". Here you will see any replicas that have come in from any of your sources. It's a simple matter from here to break the replication, which will change this to a "Local" LUN, and then delete the original LUN back in Pool2. OK, that's all for now, but I promise to give out more tricks sometime in November!!! There's very exciting stuff coming down the pipe for the ZFSSA, both new hardware and new software features that I'm just drooling over. That's all I can say, but contact your local sales SC to get an NDA roadmap talk if you want to hear more.   Happy Halloween, Steve 

    Read the article

  • "The program 'g++' can be found in the following packages"

    - by mezamorphic
    I have g++ 4.6, 4.7 and 4.8 installed, but not g++ itself. I am using Ubuntu 12.04. If I do: g++ --version it says: The program 'g++' can be found in the following packages: * g++ * pentium-builder I have tried the following: sudo apt-get update then sudo apt-get -f install then sudo apt-get install g++ but still I get the same when checking g++ version. Please help? Doing apt-cache policy g++ yields: g++: Installed: 4:4.6.3-1ubuntu5 Candidate: 4:4.6.3-1ubuntu5 Version table: *** 4:4.6.3-1ubuntu5 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages 100 /var/lib/dpkg/status
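
    A hedged suggestion: the versioned compilers are present but the /usr/bin/g++ front end appears to be missing, so reinstalling the g++ metapackage or wiring the link up explicitly with update-alternatives may restore it (the priority value 60 below is arbitrary, and g++-4.6 is assumed to be the version you want as default):

        sudo apt-get install --reinstall g++ g++-4.6
        # or create the link by hand and manage it through alternatives
        sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.6 60
        g++ --version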

    Read the article

  • MySQL my.cnf as a symbolic link in Ubuntu 12.04

    - by Juan Cruz
    I am not able to use a symlink for the my.cnf file (Ubuntu 12.04 server). I added the alias to the /etc/apparmor.d/tunables/alias file (as I did for 10.04, where it worked) but I get: May 30 16:00:01 ip-10-242-209-203 kernel: [176926.213403] type=1400 audit(1338393601.350:244): apparmor="DENIED" operation="open" parent=1 profile="/usr/sbin/mysqld" name="/opt/data/my.cnf" pid=18128 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222016] init: mysql main process (18128) terminated with status 1 May 30 16:00:01 ip-10-242-209-203 kernel: [176926.222084] init: mysql respawning too fast, stopped As a workaround I added the following line /etc/mysql/my.cnf r, to the /etc/apparmor.d/local/usr.sbin.mysqld file. The default configuration is /etc/mysql/*.cnf r, Is this a bug? Is it an apparmor bug or a mysql bug? It seems that that configuration has changed since MySQL 5.1 (https://bugs.launchpad.net/ubuntu/+source/mysql-5.1/+bug/619172) but now worked for me. Thanks!
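
    For reference, a hedged sketch of the alias approach plus the reload step that is easy to miss (the profile is only re-read when the parser is rerun; paths are taken from the question):

        # /etc/apparmor.d/tunables/alias
        alias /etc/mysql/ -> /opt/data/,

        # reload the mysqld profile, then restart MySQL
        sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
        sudo service mysql restart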

    Read the article

  • Google+ Platform Office Hours for March 28, 2012: Hangouts API v1.0

    Google+ Platform Office Hours for March 28, 2012: Hangouts API v1.0 Here's another video from a previous session of our office hours. Watch this video to learn about the Hangouts Apps launch from +Wolff and +Jonathan. Discuss this video on Google+: goo.gl
    3:31 - Publishing your hangout app
    4:28 - Hangout applications vs extensions
    8:00 - The application switcher
    9:58 - On the terms of service, privacy policy and support contact fields
    12:07 - OAuth client and hangout apps featuring the API console
    15:50 - Registering as a Chrome web store developer
    17:44 - Linking to your hangout
    20:25 - The hangout button
    24:33 - How data URIs can make things easier in your apps
    Q&A
    29:00 - What's the status of the REST APIs?
    30:41 - How do I set the hangout topic or title?
    31:19 - How do those of us in other time zones know when your office hours will be held?
    34:04 - Can I use the hangout button with other peoples' hangout apps?
    From: GoogleDevelopers Views: 2788 28 ratings Time: 35:18 More in Science & Technology

    Read the article

  • Can't get wireless on macbook pro 8,2

    - by Jeff
    I'm a linux Newb, and I have tried several of the fixes listed to try and get my wifi drivers to work, but to no avail. Does anyone here know why this isn't working for me, or better yet, how to fix it? Under lspci -vvv I get the following output: 03:00.0 Network controller: Broadcom Corporation BCM4331 802.11a/b/g/n (rev 02) Subsystem: Apple Inc. AirPort Extreme Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR- Kernel modules: bcma With sudo lshw -class network I get this output: *-network UNCLAIMED description: Network controller product: BCM4331 802.11a/b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:03:00.0 version: 02 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:b0600000-b0603fff Any help would be greatly appreciated!
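
    A hedged list of things commonly suggested for the BCM4331 on 12.04 (not guaranteed to work on a 3.2 kernel; you will need a wired or USB-tethered connection to download the packages first):

        sudo apt-get update
        sudo apt-get install firmware-b43-installer b43-fwcutter
        # or try the proprietary Broadcom driver instead
        sudo apt-get install bcmwl-kernel-source
        sudo modprobe wl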

    Read the article

  • How to Fix "Read-only file system" error when I run something as sudo and try to make a folder/file?

    - by Andrew
    When I try to save something or rename a file/folder, I get the error "Read-only file system", and when I run something as root in the terminal I get this error: sudo: unable to open /var/lib/sudo/"My User Name"/0: Read-only file system W: Not using locking for read only lock file /var/lib/dpkg/lock E: Unable to write to /var/cache/apt/ E: The package lists or status file could not be parsed or opened. When I make a folder, the error dialog details in Nautilus say: Error creating directory: Read-only file system I would show you a picture of it, but it isn't even letting me save onto my flash drive. Please help me.
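
    A hedged sketch of the usual recovery steps: the root filesystem has typically been remounted read-only because the kernel hit a disk or filesystem error, so check the kernel log, remount, and schedule a filesystem check:

        dmesg | grep -iE 'read-only|i/o error|ext4|ext3'   # find out why it went read-only
        sudo mount -o remount,rw /                         # temporary; may fail if the disk is faulty
        sudo touch /forcefsck                              # force a filesystem check on the next boot
        sudo reboot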

    Read the article

  • Detecting 404 errors after a new site design

    - by James Crowley
    We recently re-designed Developer Fusion and as part of that we needed to ensure that any external links were not broken in the process. In order to monitor this, we used the awesome LogParser tool. All you need to do is open up a command prompt, navigate to the directory with your web site's log files in, and run a query like this: "c:\program files (x86)\log parser 2.2\logparser" "SELECT top 500 cs-uri-stem,count(*) FROM u_ex*.log WHERE sc-status=404 GROUP BY cs-uri-stem order by count(*) desc" -rtp:-1 topMissingUrls.txt And you've got a text file with the top 500 requested URLs that are returning 404. Simple!

    Read the article

  • Do professional software developers still dream of creating industry/world-changing apps?

    - by Andrew Heath
    I'm a hobby programmer. The absence of real world deadlines, customer feedback, or performance reviews leaves me free to daydream about having and implementing The Next Great Idea That Changes the World. Of course I'm aware I probably have a better chance of winning the lottery, but it's fun to imagine knocking out some fully-homebrewed app that destroys the status quo. I know many professional programmers have side projects, some for profit others not. I was wondering on the way to work this morning (non-IT boring work) if having to code for your food tended to dampen the dreaming? Does greater experience leave you jaded and more focused on the projects at hand? Not trying to be a downer, just interested in the mindset of the real software professional :-)

    Read the article

  • Oracle is sponsoring LinuxCon Japan 2012

    - by Zeynep Koch
    LinuxCon Japan is the premiere Linux conference in Asia that brings together a unique blend of core developers, administrators, users, community managers and industry experts. It is designed not only to encourage collaboration but to support future interaction between Japan and other Asia Pacific countries and the rest of the global Linux community. The conference includes presentations, tutorials, birds-of-a-feather sessions, keynotes, and sponsored mini-summits. LinuxCon Japan will be showcasing Oracle Linux in the following sessions, as well as at the Technology Showcase booth. Wednesday, June 6: 2:00 pm - Btrfs Filesystems: Status and New Features, Chris Mason, Oracle, Room 502. Friday, June 8: 3:00 pm - State of Linux Kernel Security Subsystem, James Morris, Oracle, Room 502; 5:25 pm - International/Asian Kernel Developer Panel featuring Oracle's Chris Mason. Register to attend these great sessions.

    Read the article

  • BIND9 server not responding to external queries

    - by Twitchy
    I have set up a BIND server on my dedicated box which I want to host a nameserver for my domain on. When I use dig @202.169.196.59 nzserver.co.nz locally on the server I get the following response... ; <<>> DiG 9.8.1-P1 <<>> @202.169.196.59 nzserver.co.nz ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43773 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;nzserver.co.nz. IN A ;; ANSWER SECTION: nzserver.co.nz. 3600 IN A 202.169.196.59 ;; AUTHORITY SECTION: nzserver.co.nz. 3600 IN NS ns2.nzserver.co.nz. nzserver.co.nz. 3600 IN NS ns1.nzserver.co.nz. ;; ADDITIONAL SECTION: ns1.nzserver.co.nz. 3600 IN A 202.169.196.59 ns2.nzserver.co.nz. 3600 IN A 202.169.196.59 ;; Query time: 0 msec ;; SERVER: 202.169.196.59#53(202.169.196.59) ;; WHEN: Sat Oct 27 15:40:45 2012 ;; MSG SIZE rcvd: 116 Which is good, and is the output I want. But when simply using dig nzserver.co.nz I get... ; <<>> DiG 9.8.1-P1 <<>> nzserver.co.nz ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16970 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;nzserver.co.nz. IN A ;; Query time: 308 msec ;; SERVER: 202.169.192.61#53(202.169.192.61) ;; WHEN: Sat Oct 27 17:09:12 2012 ;; MSG SIZE rcvd: 32 And if I use dig @202.169.196.59 nzserver.co.nz on another linux machine I get... ; <<>> DiG 9.7.3 <<>> @202.169.196.59 nzserver.co.nz ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached Am I doing something wrong here? Port 53 is definitely open. /etc/bind/named.conf.options options { directory "/var/cache/bind"; forwarders { 202.169.192.61; 202.169.206.10; }; listen-on { 202.169.196.59; }; }; /etc/bind/named.conf.local zone "nzserver.co.nz" { type master; file "/etc/bind/nzserver.co.nz.zone"; }; /etc/bind/nzserver.co.nz.zone ; BIND db file for nzserver.co.nz $ORIGIN nzserver.co.nz. @ IN SOA ns1.nzserver.co.nz. mr.steven.french.gmail.com. ( 2012102606 28800 7200 864000 3600 ) NS ns1.nzserver.co.nz. NS ns2.nzserver.co.nz. MX 10 mail.nzserver.co.nz. @ IN A 202.169.196.59 * IN A 202.169.196.59 ns1 IN A 202.169.196.59 ns2 IN A 202.169.196.59 www IN A 202.169.196.59 mail IN A 202.169.196.59
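
    Since the server answers itself but times out from outside, the usual suspects are the bind address and a filter on UDP port 53 rather than the zone data. A hedged checklist (the commands are illustrative, not a guaranteed diagnosis):

        # Is named bound to the public address on UDP and TCP 53?
        sudo netstat -lnptu | grep :53
        # From another host: does TCP work where UDP times out? That points at UDP 53 being filtered.
        dig +tcp @202.169.196.59 nzserver.co.nz
        # If a host firewall is active, allow DNS through it
        sudo ufw allow 53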

    Read the article

  • links for 2010-04-07

    - by Bob Rhubart
    James McGovern: Enterprise Architecture and Social CRM "With a few exceptions, the vast majority of enterprise architects I know spend an awful lot of time focused on internal issues whether it is rationalization, the cloud, storage governance, data center consolidation, creation of reference architectures, portfolio management and other considerations that aren’t even visible to customers. One should ask whether IT can be truly successful if we are busy listening to the business but otherwise are blissfully ignorant towards the customers they serve." -- James McGovern (tags: enterprisearchitecture crm socialcomputing) WRF Benchmark: X6275 Beats Power6 - BestPerf "Oracle's Sun Blade X6275 cluster is 28% faster than the IBM POWER6 cluster on Weather Research and Forecasting (WRF) continental United States (CONUS) benchmark datasets. The Sun Blade X6275 cluster used a Quad Data Rate (QDR) InfiniBand connection along with Intel compilers and MPI." (tags: oracle sun x6275 benchmarks)

    Read the article

  • nginx PPA does not work?

    - by Peter Smit
    I want to use the newest version of nginx, so I wanted to add the nginx/stable PPA: sudo add-apt-repository ppa:nginx/stable sudo apt-get update However, the upgrade command says that there are no upgrades available and nginx is still the old version. Did I do something wrong? I use Ubuntu server 10.04 Lucid. add-apt-repository output: $ sudo apt-add-repository ppa:nginx/stable Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 8B3981E7A6852F782CC4951600A6F0A3C300EE8C gpg: requesting key C300EE8C from hkp server keyserver.ubuntu.com gpg: key C300EE8C: "Launchpad Stable" not changed gpg: Total number processed: 1 gpg: unchanged: 1 apt-cache policy output: $ sudo apt-cache policy nginx nginx: Installed: 0.7.65-1ubuntu2 Candidate: 0.7.65-1ubuntu2 Version table: *** 0.7.65-1ubuntu2 0 500 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid/universe Packages 100 /var/lib/dpkg/status

    Read the article

  • Why is mysql unable to configure?

    - by Tijs
    I am trying to install mysql-server on my Ubuntu VPS server; it installs, but fails to configure. When it tries to configure, I get this: ...fail! invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.0 (--configure): subprocess post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.0; however: Package mysql-server-5.0 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-5.0 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) Does anyone know a solution to this? By the way, my other question can be closed; it's solved, but I can't comment on it anymore.
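
    A hedged way to dig further: the post-installation script fails because mysqld itself refuses to start, so the reason is usually in the MySQL or system log; if the installation is beyond repair, a full purge and reinstall often clears it (warning: removing /var/lib/mysql deletes any existing databases):

        sudo tail -n 50 /var/log/mysql/error.log
        sudo grep -i mysql /var/log/syslog | tail -n 20
        # last resort: purge everything and start over
        sudo apt-get purge mysql-server mysql-server-5.0 mysql-common
        sudo rm -rf /etc/mysql /var/lib/mysql
        sudo apt-get install mysql-server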

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast growing ISV positioned by Gartner in the “Visionaries” quadrant of the “Magic Quadrant for Data Integration Tools”, we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on: Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor Overall throughput of the SPARC T4-1 server using multiple threads The tests consisted in reading large amounts of data --ten's of gigabytes--, processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations. Multi-thread: Testing throughput on the Oracle T4-1 The tests were performed with different number of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two stripped internal disks and one single internal disk. All storage devices used ZFS as filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure: customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT 1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008 2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008 3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008 4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007 […] The following graphs present the results of our tests: Unsurprisingly up to 16 threads, all files fit in the ZFS cache a.k.a L2ARC : once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards however, it is clear that IO becomes a bottleneck, having a good IO subsystem is thus key. Single-disk performance collapses whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- were used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion in an Oracle table using 48 parallel threads. Single-thread: Testing the single thread performance Seven different tests were performed on both servers. 
    Given that only one thread, and thus only one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.
    Read File -> Filter -> Write File: Read file, filter data, write the filtered data to a new file. The filter is set on the “Status” column: only lines with status set to “A” are selected. This limits each output file to about 500 MB.
    Read File -> Load Database Table: Read file, insert into a single Oracle table.
    Average: Read file, compute the average of a numeric column, write the result to a new file.
    Division & Square Root: Read file, perform a division and square root on a numeric column, write the resulting data to a new file.
    Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
    Transform: Read file, transform, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    Sort: Read file, sort a numeric and an alphanumeric column, write the resulting data to a new file.
    The following table and graph present the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

        Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
        Read/Filter/Write        125              806               645%          100                 16
        Read/Load Database       195              1111              570%          64                  11
        Average                  96               557               580%          130                 22
        Division & Square Root   161              1054              655%          78                  12
        Oracle DB Dump           164              945               576%          76                  13
        Transform                159              1124              707%          79                  11
        Sort                     251              1336              532%          50                  9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way toward filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application! "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

    Read the article

  • Winetricks fails to find program files directory

    - by EgyptLovesUbuntu
    I installed a fresh copy of Ubuntu 12 desktop, then: Installed WINE from the Ubuntu Software Center. Installed WineTricks from the Ubuntu Software Center. When I type the following command in the terminal: sudo winetricks dotnet40 I get this error message: wine cmd.exe /c echo '%ProgramFiles%' returned empty string If I try the command without sudo winetricks dotnet40 The output is as follows Executing w_do_call dotnet40 Executing load_dotnet40 ------------------------------------------------------ dotnet40 does not yet fully work or install on wine. Caveat emptor. ------------------------------------------------------ Executing mkdir -p /home/vectoruser/.cache/winetricks/dotnet40 mkdir: cannot create directory `/home/vectoruser/.cache/winetricks/dotnet40': Permission denied ------------------------------------------------------ Note: command 'mkdir -p /home/vectoruser/.cache/winetricks/dotnet40' returned status 1. Aborting. ------------------------------------------------------ My current user is vectoruser, which I use to log on to Ubuntu. The output of ls -ld /home/vectoruser /home/vectoruser/.cache /home/vectoruser/.cache/winetricks gives: drwxr-xr-x 32 vectoruser vectoruser 4096 Aug 2 19:26 /home/vectoruser drwx------ 19 vectoruser vectoruser 4096 Aug 2 19:25 /home/vectoruser/.cache drwxr-xr-x 2 root root 4096 Aug 2 18:09 /home/vectoruser/.cache/winetricks
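
    A hedged fix based on the ls output above: ~/.cache/winetricks is owned by root (most likely created by an earlier sudo run), so a normal-user run cannot write to it, and winetricks generally should not be run with sudo in the first place. Reclaim the directory and retry without sudo:

        sudo chown -R vectoruser:vectoruser /home/vectoruser/.cache/winetricks
        winetricks dotnet40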

    Read the article
