Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 256/976

  • cannot get upstart to run user job

    - by dre
    I am trying to get upstart to start a user job during the boot of my machine. I have my conky.conf upstart config file in my $HOME/.init directory. When I run "start conky" I get this error:
    dre@dre-laptop:~$ start conky
    start: Rejected send message, 1 matched rules; type="method_call", sender=":1.76" (uid=1000 pid=2843 comm="start conky ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
    dre@dre-laptop:~$
    I (think I) know that this error has to do with the D-Bus system (authentication). I also read (http://upstart.ubuntu.com/cookbook/#id96) that Ubuntu 12.10 already has the right configuration in the D-Bus config file "/etc/dbus-1/system.d/Upstart.conf" to allow normal users to use upstart.
    dre@dre-laptop:~/.init$ cat conky.conf
    description "conky, a system monitor applet"
    start on lightdm
    stop on shutdown
    # Automatically restart process if crashed
    respawn
    # Essentially lets upstart know the process will detach itself to the background
    #expect fork
    # Start conky
    exec /usr/bin/conky
    So, who knows what I am doing wrong? Greetings, Andre. No one??? Please... give me your best shot.
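
    A hedged sketch of what a user job can look like, assuming an Upstart release with user-session support enabled and the cookbook's D-Bus policy in place; the 'start on startup' stanza is an assumption to verify against your Upstart version (startup being the event the user-session init emits when it starts):

        # ~/.init/conky.conf -- illustrative sketch, not a confirmed fix
        description "conky, a system monitor applet"
        # 'lightdm' is a system-level event that a user-session job never
        # sees, so 'start on lightdm' can leave the job unstartable even
        # once the D-Bus permissions are right
        start on startup
        respawn
        exec /usr/bin/conky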

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining an approach that may best fit our needs for a big web application development. For now, I'm thinking of going with an ASP.NET architecture with a DAL using Entity Framework, a Repository concept so the BLL does not access the DAL directly, and a BLL that calls the repository and performs every manipulation necessary to prepare data to push into a presentation layer (.aspx files). I don't plan to use ASP.NET controls and prefer to keep things simple and lightweight using plain HTML, jQuery UI controls, and doing most of the server calls with jQuery Ajax. Sometimes, when needed, I plan to use handlers (.ashx) to call BLL methods that will return JSON or HTML to the client for dynamic stuff. My solution also has a test project that mocks the Repository with in-memory data, so testing BLL methods doesn't rely on the database... It may be useful to add that we will build a big application over this architecture, with hundreds of tables and stored procedures and a lot of reading and writing to the database. My question is, having this architecture in mind, is there any evident advantage that I can obtain by using an MVC3 project instead of the described architecture based on WebForms? Do you see any problem in this architecture that may cause us trouble during the next steps of development? I know the MVC pattern from using it in other projects with Django... but the Microsoft MVC implementation looks so much more complex and verbose than Django's MVC, and that's why I'm hesitating (or waiting for a little push?) right now before jumping into it... We are in a real project with deadlines and don't want to slow the development process without any real benefits.

    Read the article

  • Design pattern for client/server sessions?

    - by nonot1
    Are there any common patterns or general guidance I can learn from for how to design a client/server system where both the client and server must maintain some kind of per-client session state? I've found any number of libraries that can help with some of the plumbing, but it's the overall design I'm wondering about. Open issues in my mind:
    1. How to structure the client/server communication so that bidirectional synchronous and asynchronous requests are possible?
    2. The server side needs to spawn a couple of per-connected-client, session-long helper processes. How to manage that?
    3. How to manage the mapping from a given client (and any of its requests) to server state and helper process instances in the face of multiple clients and intermittent network connectivity?
    Most communication can be simple blocking request/reply, but some will be long-running processing tasks that the client will want to keep tabs on. To the extent that it matters, the platform is Linux/C/C++. Not web based. Just an existing thick-client software app being modified to talk to backend servers for some tasks.

    Read the article

  • Portal and Content - Components, part 3 – Applied Customization Framework (4 of 7)

    - by Stefan Krantz
    Have you ever been challenged with a situation where your work task asks you to implement functionality in WebCenter Portal, and you browse through the Resource Catalog (Business Dictionary) and find the functionality you need? However, when you get started there are small shortcomings and you ask yourself:
    - How can I re-use what is available out of the box?
    - What code do I need to produce similar functions and include my new requirements?
    - Must I write a new taskflow?
    The answer to the above questions is, in many cases, simply that you can do a taskflow customization to out-of-the-box taskflows. In this post I will help you understand how to do such a customization. It is best described as a 4-step process; see the image flow below for illustration. Just to clarify a few naming confusions that might occur when going through the above process:
    - Customization Role is a function within JDeveloper that allows you to implement view and flow customizations to existing taskflows.
    - WebCenter Portal – Spaces Taskflow Customization Framework: this technology scope does not only refer to WebCenter Spaces; it also includes WebCenter Portal/Framework.
    - A taskflow customization does not overwrite or replace any code; it just creates an additional tip view of the taskflow in the MDS for the current application (WebCenter Portal or WebCenter Spaces).
    To sum up this simple procedure, I would also like to help you find your way around the main topic for this post series. This series focuses primarily on content integration with WebCenter Portal, so where can you find content-related taskflows in the WebCenter libraries? The list below mentions some useful locations for taskflows and each taskflow's page fragments.
    Library Reference - WebCenter Document Library Service View
    Content Presenter
    Path: oracle.webcenter.doclib.view.jsf.taskflows.presenter
    Taskflow: contentPresenter.xml - The Content Presenter taskflow
    Taskflow: contentPresenterWizard.xml - The publishing wizard to select content, select a template, and preview, including contribution
    Document Manager
    Path: oracle.webcenter.doclib.view.jsf.taskflows.docManager
    Taskflow: documentManager.xml - The Document Manager taskflow, which includes references to document management features including browsing, download, uploading and viewing
    For more information on taskflow customizations please see the following documentation: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_taskflows.htm#BACIEGJD

    Read the article

  • How to install Windows (x86/x64) on Linux (Ubuntu)

    - by yorrany
    I installed Ubuntu 10.04 over my Windows 7, completely eliminating the original installation. Afterwards I was forced to reverse the process, but could not find tools or explanations of how to do it. To clarify, the equipment is: an Acer netbook, with no CD/DVD optical drive, so the process must be done entirely via USB. I hope I have been clear enough; I am counting on your support. Thank you.

    Read the article

  • How to translate formulas into natural language?

    - by Ricky
    I am currently working on a project aiming at evaluating whether an Android app crashes or not. The evaluation process is:
    1. Collect the logs (which record the execution process of an app).
    2. Generate formulas to predict the result (the formulas are generated by GP).
    3. Evaluate the logs with the formulas.
    Now I can produce formulas, but for the convenience of users I want to translate the formulas into natural language and tell users why a crash happened. (I think it looks like "inverse natural language processing".) To explain the idea more clearly, imagine you got a formula like this:
    155 - count(onKeyDown) >= 148
    It's obvious that if count(onKeyDown) > 7, the result of "155 - count(onKeyDown) >= 148" is false, so a log that contains more than 7 onKeyDown events would be predicted "Failed". I want to show users that if the onKeyDown event appears more than 7 times (155 - 148 = 7), this app will crash. However, the real formulas are much more complicated, such as:
    (< !( ( SUM( {Att[17]}, Event[5]) <= MAX( {Att[7]}, Att[0] >= Att[11]) OR SUM( {Att[17]}, Event[5]) > MIN( {Att[12]}, 734 > Att[19]) ) OR count(Event[5]) != 1 ) > (< count(Att[4] = Att[3]) >= count(702 != Att[8]) + 348 / SUM( {Att[13]}, 641 < Att[12]) mod 587 - SUM( {Att[13]}, Att[10] < Att[15]) mod MAX( {Att[13]}, Event[2]) + 384 > count(Event[10]) != 1))
    I tried to implement this function in C++, but it's quite difficult; here's the snippet of code I am working on right now. Does anyone know how to implement this function quickly? (Maybe with some tools or research findings?) Any idea is welcomed :) Thanks in advance.

    Read the article

  • How to force remove a package if dpkg removal script fails?

    - by fodon
    I'm trying to remove a package where I deleted the /etc/init.d/disco-master file (in an attempt to remove the package manually). I want to remove the disco-master package. How do I do this now? This is what happens when I do sudo apt-get remove disco-master:
    removing disco-master ...
    invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
    dpkg: error processing disco-master (--remove):
    subprocess installed pre-removal script returned error exit status 100
    Errors were encountered while processing: disco-master
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    When I do sudo apt-get install --reinstall disco-master I get the following:
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
    disco-master : Depends: disco-node (= 0.4.2+nmu1) but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    When I do sudo apt-get -f install I get this:
    Unpacking disco-node (from .../disco-node_0.4.2+nmu1_amd64.deb) ...
    dpkg: error processing /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb (--unpack):
    trying to overwrite '/usr/lib/disco/master/ebin/disco.app', which is also in package disco-master 0.4.1
    No apport report written because MaxReports is reached already
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing: /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    When I run sudo apt-get remove disco-node I get the following:
    Package disco-node is not installed, so not removed
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
    disco-master : Depends: disco-node (= 0.4.1) but it is not going to be installed
    Depends: python-disco (= 0.4.1) but 0.4.2+nmu1 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    When I did sudo dpkg -P --force-all disco-master I got:
    Removing disco-master ...
    invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
    dpkg: error processing disco-master (--purge):
    subprocess installed pre-removal script returned error exit status 100
    Errors were encountered while processing: disco-master
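
    A hedged sketch of the usual way out: dpkg runs maintainer scripts kept under /var/lib/dpkg/info/, so replacing the failing pre-removal script with a no-op lets the purge proceed (the package name is taken from the question; back the script up first):

        # neutralize the failing prerm script, then purge the package
        sudo cp /var/lib/dpkg/info/disco-master.prerm /root/disco-master.prerm.bak
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/disco-master.prerm
        sudo dpkg --purge disco-master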

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).
    What is the Cross-Platform Transportable Tablespace Feature?
    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.
    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
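
    As a rough illustration of the core mechanics only (tablespace, platform, and file names are placeholders; the certified EBS procedure documented in My Oracle Support adds many steps around this):

        # 1. Make the tablespace read-only and export its metadata
        sqlplus / as sysdba <<'SQL'
        ALTER TABLESPACE my_ts READ ONLY;
        SQL
        expdp system dumpfile=my_ts.dmp directory=DATA_PUMP_DIR transport_tablespaces=MY_TS

        # 2. Convert endianness with RMAN, then copy the files to the target
        rman target / <<'RMAN'
        CONVERT TABLESPACE my_ts
          TO PLATFORM 'Linux x86 64-bit'
          FORMAT '/stage/%N_%f.dbf';
        RMAN

        # 3. On the target, plug the converted datafiles in
        impdp system dumpfile=my_ts.dmp directory=DATA_PUMP_DIR \
          transport_datafiles='/u01/oradata/my_ts_01.dbf'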

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (a shooter). The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time that it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix for transformation, so it's necessary to convert it.
    Code for matrix conversion: (~0.005ms)
    btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
    {
        btMatrix3x3 bulletRotation;
        btVector3 bulletPosition;
        XMFLOAT4X4 matData = mat.GetStorage();
        // copy rotation matrix
        for ( int row=0; row<3; ++row )
            for ( int column=0; column<3; ++column )
                bulletRotation[row][column] = matData.m[column][row];
        for ( int column=0; column<3; ++column )
            bulletPosition[column] = matData.m[3][column];
        return btTransform( bulletRotation, bulletPosition );
    }
    The function for updating the transform (physics):
    void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove(Mat mat, ActorId aid)
    {
        if ( btRigidBody * const body = FindActorBody( aid ) )
        {
            btTransform tmp = Mat_to_btTransform( mat );
            body->setWorldTransform( tmp );
        }
    }
    The real problem is the function FindActorBody(id):
    ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
    if ( found != m_actorBodies.end() )
        return found->second;
    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias

    Read the article

  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this:
    procedure TForm1.Button1Click(Sender: TObject);
    var
      RegEx: TRegEx;
      Match: TMatch;
    begin
      RegEx := TRegex.Create('\w+');
      Match := RegEx.Match('One two three four');
      while Match.Success do begin
        Memo1.Lines.Add(Match.Value);
        Match := Match.NextMatch;
      end;
    end;
    Or you could save yourself two lines of code by using the static TRegEx.Match call:
    procedure TForm1.Button2Click(Sender: TObject);
    var
      Match: TMatch;
    begin
      Match := TRegEx.Match('One two three four', '\w+');
      while Match.Success do begin
        Memo1.Lines.Add(Match.Value);
        Match := Match.NextMatch;
      end;
    end;
    Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn't work. Depending on your exact code, you may get fewer matches or blank matches than you should, or your application may crash with an access violation.
    The RegularExpressions unit defines TRegEx and TMatch as records. That way you don't have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you'll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management.
    The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the instance of the interface that is responsible for destroying TPerlRegEx. This is not a problem in our first code sample. TRegEx stays in scope until we're done with TMatch. The interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns. Thus the reference count on the interface reaches zero and TPerlRegEx is destroyed when TRegEx.Match returns. When we call MatchAgain the TMatch record tries to use a TPerlRegEx instance that has already been destroyed.
    To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit:
    1. Declare FNotifier: IInterface; in the private section.
    2. Add the parameter ANotifier: IInterface; to the Create constructor.
    3. Assign FNotifier := ANotifier; in the constructor's implementation.
    You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won't be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.

    Read the article

  • Find files containing a string on the whole filesystem

    - by Fabio
    I need to find all the instances of a given string in the whole filesystem, because I don't remember in which configuration files, scripts or other programs I put it, and I need to update that string with a new one. I tried with the following command:
    grep -nr 'needle' / --exclude-dir=.svn | mail [email protected] -s 'References on xxx'
    If I run this command on a small directory it gives me the output I need in the form
    /path1/:nn:line containing needle
    /path2/:nn:line containing needle
    where /path1 is the full path of the file, nn is the row containing the needle and the last field is the content of the line. However, when I run the command on the root directory the grep process hangs after a while. I ran this script about 8 hours ago and even on a small filesystem (less than 5GB) it doesn't end, and if I run top or ps the process seems to be sleeping:
    root 24909 0.0 0.1 3772 1520 pts/1 S+ Feb10 0:15 grep -nr needle / --exclude-dir=.svn
    Why doesn't it end? Is there any better way to do this (it's a one-time job, I don't need to execute this more than once)? Thanks.
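
    A hedged variant that sidesteps the usual hang sources -- grep descending into /proc (where pseudo-files like /proc/kcore appear enormous) and blocking on device nodes or FIFOs -- by skipping pseudo-filesystems, devices and binaries:

        # -I skips binary files; -D skip avoids blocking on devices/FIFOs;
        # the excluded directories are the common pseudo-filesystems
        grep -rnI -D skip 'needle' / \
          --exclude-dir={proc,sys,dev,.svn} \
          2>/dev/null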

    Read the article

  • Custom Session Management using HashTable

    - by kaleidoscope
    ASP.NET session state lets you associate a server-side string or object dictionary containing state data with a particular HTTP client session. A session is defined as a series of requests issued by the same client within a certain period of time, and is managed by associating a session ID with each unique client. The ID is supplied by the client on each request, either in a cookie or as a special fragment of the request URL. The session data is stored on the server side in one of the supported session state stores, which include in-process memory, SQL Server™ database, and the ASP.NET State Server service. The latter two modes enable session state to be shared among multiple Web servers on a Web farm and do not require server affinity.
    To implement a custom session handler you need to follow this process:
    1. Create a class library which inherits from the SessionStateStoreProviderBase abstract class.
    2. Implement all abstract methods of the base class.
    3. Change the session mode to "Custom" in the web.config file and reference your provider by name:
    <sessionState mode="Custom" customProvider="MyProvider">
      <providers>
        <add name="MyProvider" type="Namespace.ClassName" />
      </providers>
    </sessionState>
    For more details please refer to the following links:
    http://msdn.microsoft.com/en-us/magazine/cc163730.aspx
    http://msdn.microsoft.com/en-us/library/system.web.sessionstate.sessionstatestoreproviderbase.aspx
    - Chandraprakash, S

    Read the article

  • TIME column in TOP command for mysql

    - by michael
    When I run top on my database server I see that mysqld has been running for 4:00.51, and the number continues to go up. From other posts on here, I assume this means that one mysql process has been running this long. It's not set to cumulative mode, as best I can tell, since the heading would change to CTIME if that were the case. What I'm wondering is whether this is normal for a site that makes a lot of individual connections using PHP. I shouldn't have any long-running processes that would hold on to a mysql connection this long, only seconds at most. Am I incorrect to assume that this time relates to one connection/process running? Usually I see entries flash up and away in top, not just stay there with this number increasing.
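
    A hedged aside on reading that column: top's TIME+ for mysqld is the daemon's cumulative CPU time since it started, not the age of any one client connection, so a busy PHP site will push it up steadily even when every connection lasts only seconds. Per-connection ages are visible from inside MySQL instead:

        # the 'Time' column here is per client thread, in seconds
        mysqladmin -u root -p processlist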

    Read the article

  • Cannot install SQL Server (2012) PowerPivot for SharePoint, always fails SharePoint version check

    - by ProfessionalAmateur
    Trying to do a fresh install of SharePoint 2010 (with SP1) and SQL Server 2012 PowerPivot for SharePoint. The prerequisites clearly show that SharePoint 2010 SP1 is needed, which we have installed. However, when trying to install the SQL Server portion, we consistently fail the 'SharePoint version requirement for PowerPivot for SharePoint' validation rule in the SQL Server install process. Here is the process we are following:
    1. Install SharePoint 2010
    2. Install SharePoint 2010 SP1
    3. Install SQL Server 2012 PowerPivot for SharePoint
    Here is a screen shot of the error and the log file error. We are completely stuck at this point; has anyone run into this before?

    Read the article

  • How can I log all traffic with its exact length?

    - by Legate
    I want to process all packets going through our gateway server (running Debian 4.0), together with their sizes. My idea is to use tcpdump, but I have two questions. The command I'm currently thinking of is tcpdump -i iface -n -t -q.
    1. Is it guaranteed that tcpdump will process all packets? What happens if the CPU is working at full capacity?
    2. The format of the output lines is IP ddd.ddd.ddd.ddd.port > ddd.ddd.ddd.ddd.port: tcp 1260. What exactly is 1260? I have the suspicion that it is the payload of the packet in bytes, which would be exactly what I need, but I'm not sure. It might be the TCP window size.
    Or perhaps there is an even better way of doing this? I thought about a LOG rule in iptables, but tcpdump seems easier and I don't know whether iptables can log the packet lengths.
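
    Two hedged observations: tcpdump is not guaranteed lossless under load, but it reports "packets dropped by kernel" on exit, so any loss is at least measurable; and with -v the IP header line carries an explicit "length" field (the datagram's total length), which avoids guessing what the trailing number means (exact output format varies by tcpdump version). An iptables LOG rule also records a LEN= field per packet, for comparison:

        # -l line-buffers the output so it can be piped to a summing script
        tcpdump -i iface -n -t -v -l 'tcp' | grep -o 'length [0-9]*'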

    Read the article

  • How can I upgrade my server's kernel without rebooting?

    - by Oli
    This is a loaded question because I'm already aware of, and am very interested in, ksplice. The problem is that since they were bought by Oracle, they have been forced to pull numerous server distributions from their offerings. The answer isn't as simple as it once was. I noticed a question on Unix.SE that states: "You can build your own ksplice patches to dynamically load into your own kernel." Great! But how?! I've installed the free ksplice package from the repo on my desktop (not ksplice-uptrack, which is non-free) and now want to generate and apply updates. What's the process? Are there any scripts out there to automate the process? Moreover, if all the machinery required for rebootless upgrades is sitting there in the kernel (and the ksplice package), why on earth aren't we taking advantage of it by default?
    Note 1: I am happy with a solution besides ksplice, but it has to deliver the same thing: rolling updates to the kernel that can be applied without rebooting the server.
    Note 2: I'll say it again; the main ksplice "service" does not support Ubuntu Server. It used to, but it doesn't any more. When I talk about wanting to use ksplice, I'm talking about the open source tools in the ksplice package. Any answer that talks about ksplice-uptrack is probably not what I'm after, as this is the part that integrates directly with the aforementioned "service".
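
    A sketch of the build-it-yourself route, with the caveat that the tool names and flags below come from my reading of the open-source ksplice package's documentation and should be verified against the shipped version; the patch file and source path are placeholders:

        # build a rebootless update from a patch, against the source tree
        # and config matching the running kernel
        ksplice-create --patch=fix-cve.patch /usr/src/linux-source-3.2
        # load the resulting update package into the running kernel
        sudo ksplice-apply ksplice-xxxxxxxx.tar.gz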

    Read the article

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (seeing boot.log) I think it's the / filesystem, which is located at /dev/sdb5. Several questions together here (hope this does not break any rule):
    1. Is this normal?
    2. Can I (or even should I) prevent this anyhow?
    3. According to boot.log (below) the filesystem does not seem to be 'clean', or, at least, it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix it?
    Edit: This is my boot.log:
    fsck from util-linux-ng 2.17.2
    udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
    /dev/sdb5: 249045/32841728 files (0.3% non-contiguous), 20488485/131338752 blocks
    init: ureadahead-other main process (1111) terminated with status 4
    init: ureadahead-other main process (1116) terminated with status 4
    Password:
    * Starting AppArmor profiles
    Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox [ OK ]
    * Setting sensors limits [ OK ]
    And this is the dumpe2fs result for the filesystem being checked (the relevant part of the log):
    Filesystem volume name: <none>
    Last mounted on: /
    Filesystem UUID: 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags: signed_directory_hash
    Default mount options: (none)
    Filesystem state: clean
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 32841728
    Block count: 131338752
    Reserved block count: 6566937
    Free blocks: 110850356
    Free inodes: 32592701
    First block: 0
    Block size: 4096
    Fragment size: 4096
    Reserved GDT blocks: 992
    Blocks per group: 32768
    Fragments per group: 32768
    Inodes per group: 8192
    Inode blocks per group: 512
    Flex block group size: 16
    Filesystem created: Fri Dec 10 19:44:15 2010
    Last mount time: Mon Feb 14 17:00:02 2011
    Last write time: Mon Feb 14 16:59:45 2011
    Mount count: 1
    Maximum mount count: 33
    Last checked: Mon Feb 14 16:59:45 2011
    Check interval: 15552000 (6 months)
    Next check after: Sat Aug 13 17:59:45 2011
    Lifetime writes: 331 GB
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 256
    Required extra isize: 28
    Desired extra isize: 28
    Journal inode: 8
    First orphan inode: 28049496
    Default directory hash: half_md4
    Directory Hash Seed: d3d24459-514b-4413-b840-e970b766095b
    Journal backup: inode blocks
    Journal features: journal_incompat_revoke
    Journal size: 128M
    Journal length: 32768
    Journal sequence: 0x0005e0c4
    Journal start: 1
    This is the relevant line in fstab (at least I think this is the filesystem being checked):
    #Entry for /dev/sdb5 :
    UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8 / ext4 errors=remount-ro 0 1
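
    A hedged note on the likely mechanism: the dumpe2fs output above shows "Maximum mount count: 33" and "Check interval: 15552000 (6 months)", and e2fsck runs automatically whenever either threshold is exceeded, even on a clean filesystem. If the periodic check is the cause here and is unwanted, it can be disabled (at some risk, since the filesystem then never gets a routine check) with tune2fs:

        # -c -1 disables the mount-count check, -i 0 the time-based check
        sudo tune2fs -c -1 -i 0 /dev/sdb5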

    Read the article

  • Open a terminal window & run command, then close the terminal window if command completed successfully?

    - by Caspar
    I'm trying to write a script to do the following:
    1. Open a terminal window which runs a long-running command
    2. (Ideally) move the terminal window to the top left corner of the screen using xdotool
    3. Close the terminal window only if the long-running command exited with a zero return code
    To put it in Windows terms, I'd like to have the Linux equivalent of start cmd /c long_running_cmd if long_running_cmd succeeds, and do the equivalent of start cmd /k long_running_cmd if it fails. What I have so far is a script which starts xterm with a given command, and then moves the window as desired:
    #!/bin/bash
    # open a new terminal window in the background with the long running command
    xterm -e ~/bin/launcher.sh ./long_running_cmd &
    # move the terminal window (requires window process to be in background)
    sleep 1
    xdotool search --name launcher.sh windowmove 0 0
    And ~/bin/launcher.sh is intended to run whatever is passed as a command line argument to it:
    #!/bin/bash
    # execute command line arguments
    $@
    But I haven't been able to get the xterm window to close after long_running_cmd is done. I think something like xterm -e ~/bin/launcher.sh "./long_running_cmd && kill $PPID" & might be what I'm after, so that xterm is launched in the background and runs ./long_running_cmd && kill $PPID. The shell in the xterm window then runs the long-running command and, if it completes successfully, the parent process of the shell (i.e. the process owning the xterm window) is killed, thereby closing the xterm window. But that doesn't work: nothing happens, so I suspect my quoting or escaping is incorrect, and I haven't been able to fix it.
    An alternate approach would be to get the PID of long_running_cmd, use wait to wait for it to finish, then kill the xterm window using kill $! (since $! refers to the last task started in the background, which will be the xterm window). But I can't figure out a nice way to get the PID and exit value of long_running_cmd out of the shell running in the xterm window and into the shell which launched the xterm window (short of writing them to a file somewhere, which seems like it should be unnecessary?). What am I doing wrong, or is there an easier way to accomplish this?
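
    A minimal sketch of the close-only-on-success behaviour, sidestepping the quoting problem by letting the inner shell decide whether to stay alive; long_running_cmd stands in for the real command:

        #!/bin/bash
        # if the command succeeds, the inner shell exits and xterm closes;
        # if it fails, exec bash keeps the window open for inspection
        xterm -e bash -c './long_running_cmd || exec bash' &
        sleep 1
        xdotool search --name xterm windowmove 0 0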

    Read the article

  • Podcast Show Notes: By Any Other Name: Governance and Architecture

    - by Bob Rhubart
    The OTN ArchBeat Podcast returns from a brief summer hiatus with a three-part conversation about IT architecture and governance. My guests for this conversation are Eric Stephens, an Oracle enterprise architect and a frequent guest on this program, and, joining Eric on the panel, Tim Hall, Senior Director of product management for the Oracle Enterprise Repository, Oracle Service Registry, and Oracle Application Integration Architecture. Tim made his first appearance on ArchBeat as a panelist on the recent program featuring Thomas Erl.
    The Conversation
    - Listen to Part 1: Why it's important to revive the dormant conversation about IT governance.
    - Listen to Part 2 (Sept 19): Balancing functional, technical, and operational requirements to meet the challenge of defining appropriate governance "guardrails."
    - Listen to Part 3 (Sept 26): Bringing IT architecture out of the ivory tower to make governance a less intimidating, more collaborative process.
    Additional Resources
    - Leveraging Governance to Sustain Enterprise Architecture Efforts, an Oracle white paper by Eric Stephens.
    - SOA, Cloud, and Service Technologies, a transcript of an ArchBeat interview with Thomas Erl, Tim Hall, and Demed L'Her, in which Tim says the following about governance: "For a long time people have argued that SOA governance is sort of an awkward name; no one wanted to be audited. There's 50% of the world that thinks, yes, we're going to have a top-down initiative to address this, and there's 50% of the world that says it feels like a heavyweight process that I want no part of. So what I think we should do is change the name…"

    Read the article

  • CodePlex Daily Summary for Thursday, September 13, 2012

    Popular Releases
    - Australia Income and Tax Calculator: Australia Income and Tax Calculator: first release; can calculate net income, tax, and quarterly/monthly/weekly/daily and hourly taxable/net rates.
    - datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0-beta: datajs is a cross-browser and UI-agnostic JavaScript library that enables data-centric web applications with the following features: an OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads; a single store abstraction that provides a common API on top of HTML5 local storage technologies; a data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...
    - SharePoint (2010) Farm Backup: PowerShell SharePoint (2010) Farm Backup v2.2: Version 2.2 changelog: added the ability to export solutions (WSP) from the solution gallery; added the ability to exclude MySites from the sites backup; added an Is-Foundation method to determine whether the SharePoint edition is Foundation, Standard or Enterprise, preventing errors when running the script on SharePoint Foundation 2010, as Foundation does not have MySite functionality; added a method to determine the amount of storage required for sites backup. Script will now determine total required fo...
    - Metadata Document Generator for Microsoft Dynamics CRM 2011: Metadata Document Generator (2.0.325.117): adds the latest version of the McTools.Xrm.Connection library to correct Office 365 authentication support.
    - Lakana - WPF Framework: Lakana V2: Lakana V2 contains Lakana WPF Forms (with sample project) and Lakana WPF Navigation (with sample project).
    - Microsoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...
    - Arduino for Visual Studio: Arduino 1.x for Visual Studio 2012, 2010 and 2008: Register for the visualmicro.com forum for more news and updates. Version 1209.10 includes support for VS2012 and minor fixes for the Arduino debugger beta test team. Version 1208.19 is considered stable for Visual Studio 2010 and 2008. If you are upgrading from an older release of Visual Micro and encounter a problem, uninstall "Visual Micro for Arduino" using "Control Panel > Add and Remove Programs" and then run the install again. Key features of 1209.10: support for Visual Studio 2...
    - Microsoft Script Explorer for Windows PowerShell: Script Explorer Reference Implementation(s): This download contains source code and documentation for the Script Explorer DB reference implementation. You can create your own provider and use it in Script Explorer. Refer to the documentation for more information. The source code is provided "as is" without any warranty. Read the Readme.txt file in the source code.
    - Social Network Importer for NodeXL: SocialNetImporter (v.1.5): This new version includes a fix for the "resource limit" bug caused by Facebook, plus bug fixes. To use the new graph data provider, unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e. "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns"), open the NodeXL template, and access the new importer from the "Import" menu.
    - AcDown Downloader Framework: AcDown v4.1: release notes are in Chinese and were mis-encoded in this summary; the recoverable details are that it supports 32- and 64-bit Windows XP/Vista/7/8 (requiring .NET Framework 2.0 on Windows XP) and Linux via Mono.
    - Move Mouse: Move Mouse 2.5.2: FIXED - minor fixes and improvements.
    - MVC Controls Toolkit: Mvc Controls Toolkit 2.3: The new release is compatible with Mvc4 RTM. Added support for handling time zones in dates; specifically, helper methods to convert to UTC or local time all DateTimes contained in a model received by a controller, and helper methods to handle date-only fields. This, together with detailed documentation on how time zones are handled in all situations by the Asp.net Mvc framework, will help mitigate the nightmare of dates and time zones. Multiple templates, and more options to...
    - DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.00: Maintenance release. Changes in Metro7 06.02.00: fixed width and height on the jQuery popup for the Editor; navigation provider changed to DDR menu; added menu files and scripts; changed skins to doctype HTML; changed manifest to a DNN6 manifest file; changed license to HTML view; fixed an issue in Metro7/PinkTitle.ascx with double registering of the Actions; changed the source folder structure and start folder so the project works with the default DNN structure for development; added VS 20...
    - Xenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.9.0: Release notes: improved framework architecture; improved framework security; more import/export formats and operations; new WebPortal application which includes forum, news, blog, catalog, etc. UIs; improved WebAdmin app; reports, navigation and search; performance optimization; improved Xenta.Catalog domain; more plugin interfaces and plugin implementations; refactoring; Windows Azure support; and much more... Package guide: Source Code - package contains the source code; Binaries...
    - Json.NET: Json.NET 4.5 Release 9: New feature - added JsonValueConverter. New feature - set a property's DefaultValueHandling to Ignore when EmitDefaultValue from DataMemberAttribute is false. Fix - fixed DefaultValueHandling.Ignore not ignoring default values of non-nullable properties. Fix - fixed DefaultValueHandling.Populate error with non-nullable properties. Fix - fixed error when writing JSON for a JProperty with no value. Fix - fixed error when calling ToList on empty JObjects and JArrays. Fix - fixed losing deci...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.66: Just going to bite the bullet and rip off the band-aid... SEMI-BREAKING CHANGE! Well, it's a BREAKING change to those who already adjusted their projects to use the previous breaking change's ill-conceived renamed DLLs (versions 4.61-4.65). For those who had not adapted and were still stuck in this-doesn't-work-please-fix-me mode, this is more like a fixing change. The previous breaking change just broke too many people, I'm sorry to say. Renaming the DLL from AjaxMin.dll to AjaxMinLibrary.dl...
    - DotNetNuke® Community Edition CMS: 07.00.00 CTP (Not for Production Use): NOTE: new minimum requirements (see image: http://www.dotnetnuke.com/Portals/25/Blog/Files/1/3418/Windows-Live-Writer-1426fd8a58ef_902C-MinimumVersionSupport_2.png). Simplified installer: the first thing you will notice is that the installer has been updated. Not only have we updated the look and feel, but we also simplified the overall install process. You shouldn't have to click through a series of screens just to get your website running. With the 7.0 installer we have taken an approach that a...
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.2: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet; you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes - NOTE: namespace changes. DebugConsol...
    - BIDS Helper: BIDS Helper 1.6.1: In addition to fixing a number of bugs that beta testers reported, this release includes the following new features for Tabular models in SQL 2012. New features: Tabular Display Folders, Tabular Translations Editor, Tabular Sync Descriptions. Fixed issues: Biml issues; 32849 - fixing a bug in the Tabular Actions Editor Form where you type in an invalid action name which is a reserved word like CON or which is a duplicate name of another action; 32695 - fixing a bug in SSAS Sync Descriptions whe...
    - Code Snippets for Windows Store Apps: Code Snippets for Windows Store Apps: First release of our snippets! For more information: Installation, List of Snippets. Minor update 9/13: updated C# and VB packages -- converted from VSI installers to ZIP files for easier usage with Visual Studio Express editions. Snippets contained in each package were not altered.

    New Projects
    - another hello world: a very quick test.
    - Atorpat Marquee: This is the advanced marquee pro module.
    - Australia Income and Tax Calculator: Calculates Australian net income, tax, rates.
    - Auto generate C# DAL, BLL classes and Sql Store Procedures: This program helps you auto-generate stored procedures for SQL and DAL and BLL classes for C# without any extra code.
    - BugSystem: bug system.
    - Build and Deploy Tool using BTDF: This tool can be used by a build and release manager to prepare the BizTalk MSI and deploy the application in the corresponding environment.
    - Calculation WebApplication: Calc web app.
    - Child&Family Brigade®: This software is specially realized for the Family Brigade of Cochabamba, Bolivia, a nonprofit institution that helps families with problems.
    - Creative Style System: This is our Sheridan College Capstone project.
    - Customizable Process Guidance Content for VS ALM 2012: Customizable process guidance is provided for each of the default process templates that VS ALM TFS 2012 provides.
    - DER_Autoit: 2012-9-13-14-10 (remainder of the description was mis-encoded in the original summary).
    - Dynamics Xrm Application Speed Builder: The Dynamics Xrm Application Speed Builder will analyze databases, then create the entities, attributes, and forms in CRM for you.
    - Feed Discovery: Want to subscribe to a web page and can't find the newsfeed? Just right-click on the page and discover! Subscribe directly in IE, Google Reader or any other.
    - Inmeta Tools for Visual Studio 2012 and TFS 2012: Info coming.
    - Inventory Manager: Inventory Manager is a small demo project that lets you manage your items.
    - Kayvon's Group: projects.
    - Libreta: Something about Libreta.
    - Multiple Image choice custom field type: This solution contains a "Custom Field Type" which allows the user to choose multiple images as a choice.
    - PHP-Edin: PHP course.
    - PROJETO PET: A social environment for the adoption and appreciation of animals.
    - Read the Reader: Read the Reader is a lightweight Google Reader client. It runs in the background and tells you when something's happening.
    - SharePoint 2010 File Recovery: A little utility program to allow you to easily recover files from your SharePoint 2010 content database backups.
    - Sistema para estudo do mvc: Studying asp.mvc.
    - Sports Center Asp.net MVC Demo: Sample Sports Center Asp.net MVC project; a good starter kit for getting into various features of Asp.net MVC.
    - testtom08092012git01: bvc.
    - T-SQL implementation of Standard Distribution PDF and CDF: Files for blog post at http://formaldev.blogspot.com/2012/09/T-SQL-NORMDIST-1.html
    - Wunderlist.com Shortcut Google Chrome Extension: Just exactly that, a shortcut to Wunderlist.com.

    Read the article

  • Can't seem to install the correct version of PHP, using apt-get install

    - by Mark Tomlin
    I'm using an Ubuntu 11.04 server; it's a new install on a VPS box from MediaTemple (their ve server). I'm trying to install PHP 5.4.3 on this box, but I'm having a common problem no matter what version I try. I'm trying to get the php-cgi binary, so I run the command apt-get install php-cgi and that installs, but does not provide me with the php-cgi executable. I need this so that I can run php-cgi -b 127.0.0.1:9000 and have it as a FastCGI process for my nginx install. Any idea what I can do to get this to work? Bonus points if you can get this to work with PHP 5.4.3, because all I can seem to install is PHP 5.3.5.
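
    A hedged sketch of the source-build route, since Ubuntu 11.04's repositories only carry PHP 5.3.x; the mirror URL is illustrative, the prefix is an arbitrary choice, and the configure flags shown are the minimal ones for a CGI binary (PHP 5.4 can also build the FPM daemon with --enable-fpm, which nginx setups often prefer):

        sudo apt-get install build-essential libxml2-dev
        wget -O php-5.4.3.tar.bz2 'http://www.php.net/get/php-5.4.3.tar.bz2/from/this/mirror'
        tar xjf php-5.4.3.tar.bz2 && cd php-5.4.3
        ./configure --prefix=/usr/local/php54 --enable-cgi
        make && sudo make install
        /usr/local/php54/bin/php-cgi -b 127.0.0.1:9000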

    Read the article

  • Automatic LaTeX document generation from Excel spreadsheet

    - by Bowler
    I have some data in an Excel file from which I have to generate a report. I repeat this task fairly regularly and am looking to automate it. I have a LaTeX project into which I usually just copy data by hand, export the necessary worksheets as PDFs, add them to my LaTeX project and compile with pdflatex. It has occurred to me that there must be a way to automate this process. Is there an efficient way to export the data from Excel into a LaTeX project? Possibly a VBA script in Excel could run the process? Also, it doesn't have to be LaTeX; I'm not all that experienced with MS Office's more advanced features. Is there some way, akin to a mail merge, that I could achieve this with? In some ways that might be better in case I have to pass the work on to someone who doesn't know LaTeX. Thanks.
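
    One hedged way to automate the data hop, assuming a converter such as Gnumeric's ssconvert is acceptable and the LaTeX side reads the CSV with a package like pgfplotstable (both are suggestions, not tools from the question; file names are placeholders):

        # export the spreadsheet to CSV, then rebuild the report
        ssconvert report.xlsx data.csv
        pdflatex report.tex   # report.tex typesets data.csv, e.g. via \pgfplotstabletypeset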

    Read the article

  • How to move a Windows machine properly from RAID 1 to RAID 10?

    - by goober
    Goal
    I would like to add two more hard drives to my current RAID 1 setup and create a RAID 0 setup on top of the two RAID 1 setups (which I believe is referred to as "RAID 10").
    Components Involved
    - Intel P68 chipset motherboard
    - 4 SATA ports that can be configured for RAID
    - An Intel SSD cache that sits in front of the RAID, and a 64 GB SSD configured in that manner
    - Two 1TB HDDs configured in RAID 1
    - OS: Windows 7 Professional
    Resources Consulted so far
    I found a great resource on LinuxQuestions.org for a good "best practices" process for Linux machines, but I'd like to develop a similar process that I know works on Windows machines.

    Read the article

  • Trouble installing Ubuntu 12.04 from USB

    - by Kyle J
    I want to dual-boot Ubuntu Desktop 12.04 on my new ultrabook, which has:
    - an Intel i7 3517U processor
    - 6GB RAM
    - Windows 7, 64-bit
    - no CD/DVD drive
    I created my bootable USB stick using pendrivelinux.com with the "ubuntu-12.04.1-desktop-i386.iso". I am following these directions because they include nice screenshots; however, I do not get very far in the process. I am able to boot into the Live Desktop, and then I try to install onto my hard disk. Here is the series of actions that I take next:
    1. First, I see this ( http://i.imgur.com/vucYH ) window, and click 'continue'.
    2. Then I get this ( http://imgur.com/2wESc ) window, and click 'continue' again.
    3. This appears, and I get worried because it seems like there is no recognition that I have Windows installed. According to the directions I am following, I should see /dev/sda1 and /dev/sda2 partitions. In the drop-down menu at the bottom the only "Device for boot loader installation" is /dev/sdb, and no information is shown. I am hesitant to click 'Install Now' for fear of what it might do to Windows.
    4. I click 'Quit' and cancel the installation, but then about 5 seconds later this ( http://imgur.com/a/yXi0C ) window pops up (I have expanded it to full screen to scroll and show all the details).
    5. Another second later this ( http://imgur.com/vxcrN ) comes up. I'm not sure how relevant this is.
    Does anyone have any insight into this issue? Why does it not show my current Windows partition? What would happen if I tried to continue with the installation process? Thanks!
    PS - sorry, it would only let me post 2 hyperlinks as a new user

    Read the article
