Search Results

Search found 12422 results on 497 pages for 'non disclosure agreement'.


  • Total network data sent/received of a non-daemon Linux process?

    - by leden
    I'm looking for a simple and effective way of measuring the total bytes received/sent by a single process upon its termination. Basically, I am looking for a tool which has an interface similar to "time" and "/usr/bin/time", e.g.:
    measure-net-data <prog_to_run> <prog_args>
    Received (b): XYZ
    Sent (b): ABC
    I know that there are many tools for bandwidth/network monitoring, but as far as I can tell all of them perform the measurements in real time, which is inappropriate not only because of the overhead but also because of the inconvenience - I would need to stop the program, capture the output of the tool and then kill it. I have seen that newer Linux kernels (2.6.20+) provide /proc/<pid>/io/, which contains the information I'm looking for; however, everything under /proc/<pid> disappears when the process terminates, so I'm again back to the same problem as with any network monitoring tool.

    Read the article

  • Detecting when Bluetooth is disabled on iOS5

    - by Non Umemoto
    I'm developing a blog speaker app. I want to pause the audio when Bluetooth is disabled, like the iPod app does. I thought it was not possible without using a private API after reading this: Check if Bluetooth is Enabled? But my customer told me that the Rhapsody and DI Radio apps both support it. Then I found that iOS 5 has the Core Bluetooth framework. https://developer.apple.com/library/ios/documentation/CoreBluetooth/Reference/CoreBluetooth_Framework/CoreBluetooth_Framework.pdf The CBCentralManagerStatePoweredOff status seems like the one. But the description says this API only supports Bluetooth 4.0 low energy devices. Did anyone try doing the same thing? I want to support currently popular Bluetooth headsets, or a Bluetooth-enabled steering wheel in a car. I don't know if it's worth trying when it only supports some brand-new Bluetooth devices.

    Read the article

  • Weirdest PHP error

    - by yes123
    I found this in my error log:
    [12-Nov-2011 19:22:26] PHP Warning: Attempt to assign property of non-object in on line 1768776801
    [12-Nov-2011 19:22:31] PHP Warning: Attempt to assign property of non-object in on line 1768776801
    [12-Nov-2011 19:22:39] PHP Warning: Attempt to assign property of non-object in on line 1768776801
    [12-Nov-2011 19:22:40] PHP Warning: Attempt to assign property of non-object in on line 1768776801
    [12-Nov-2011 19:22:46] PHP Warning: Attempt to assign property of non-object in on line 1768776801
    [12-Nov-2011 19:22:51] PHP Warning: Attempt to assign property of non-object in on line 1768776801
    Of course I don't have any script with 1 billion lines. PHP Version 5.3.3-7, Apache 2. The other weird thing is that I have a set_error_handler( 'myHandler' ); to write other information to the error log too, but with this error it seems PHP just ignores my error handler =/ I don't have any code that can generate this error before my call to set_error_handler.

    Read the article

  • What variable dictates position of non-focused elements in the roundabout plugin?

    - by kristina childs
    Part of the problem here is that I'm not sure what the best terminology is to use in order to find the solution. I searched and searched, so please forgive me if this is already a thread somewhere. I'm using the roundabout plugin to cycle through 3 divs. Each div is 794px wide, which makes the roundabout in-focus element 794px and the two not in focus 315.218px wide, positioned so half of each is hidden by the in-focus div. This is all well and good, however the total width of the display needs to stay within 1000px (ideally 980px, but I can fudge if need be). Basically I want to make the non-focused divs be 3/4 hidden by the in-focus div, but for the life of me I can't figure out what variables I need to edit in order to do it. Unfortunately it's not one of the many easily-changed options like z-index and minScale. I tried minScale but it's clear this isn't going to work. The plugin outputs this code:
    <li class="roundabout-moveable-item" style="position: absolute; left: -57px; top: 205px; width: 319.982px; height: 149.513px; opacity: 0.7; z-index: 146; font-size: 5.6px;">
    I need to find out what changes the left positioning so it's shifted closer to the center of the stage, like this:
    <li class="roundabout-moveable-item" style="position: absolute; left: 5px; top: 205px; width: 319.982px; height: 149.513px; opacity: 0.7; z-index: 146; font-size: 5.6px;">
    I tried playing with the positioning functions of the plugin, but all that did was shift everything in tandem left or right. Any help is greatly appreciated. This site is going to be awesome once I figure out all this jQuery stuff! Here is a link to my .js file: http://avalon.eaw.com/scripts/jquery.roundabout2.js I've got an overflow:hidden on the to help guide the positioning of those non-focused items.

    Read the article

  • How do I get through proxy server environments for non-standard services?

    - by Ripred
    I'm not real hip on exactly what role(s) today's proxy servers can play and I'm learning, so go easy on me :-) I have a client/server system I have written using a homegrown protocol and need to enhance the client side to negotiate its way out of a proxy environment. I have an existing client and server system written in C and C++ for speed, and a small amount of MFC in the client to handle the user interface. I have written both the server and client side of the system on Windows (the people I work for are mainly web developers using Windows everything - not a choice), sticking to Berkeley Sockets, as it were, via wsock32 for efficiency. The clients connect to the server through a non-standard port (even though using port 80 is an option to get out of some environments, the protocol that goes over it isn't HTTP). The TCP connection(s) stay open for the duration of the client's participation in real-time conferences. Our customer base is expanding to all kinds of networked environments. I have been able to solve a lot of problems by adding the ability to connect securely over port 443 using secure sockets, which allows the protocol to pass through a lot of environments since the internal packets can't be sniffed. But more and more of our customers are behind a proxy server environment and my direct connections don't make it through. My old-school understanding of proxy servers is that they act as a proxy for external HTML content over HTTP, possibly locally caching popular material for faster local access, and also allowing their IT staff to blacklist certain destination sites. Customers are complaining that my software doesn't recognize and easily navigate its way through their proxy environments, but I'm finding it difficult to decide what my "best fit" solution should be. My software doesn't tear down the connection after each client request, and on top of that packets can come from either side at any time - basically your typical custom client/server system for a specific niche. My first reaction is "why can't they just add my servers' addresses to their whitelist", but if there is a programmatic way I can get through without requiring their IT staff to help, it is politically better and arguably a better solution anyway. Plus maybe I'm still not understanding the role and purpose of what proxy servers and environments have grown to be these days.
    My first attempt at a solution was to use WinInet with its various proxy capabilities to establish a connection over port 80 to my non-standard protocol server (which knows enough to recognize and answer a simple HTTP-looking GET request with a simple HTTP response page, to get around some environments that employ initial packet sniffing (DPI)). I retrieved the actual SOCKET handle behind WinInet's HINTERNET request object and had hoped to use that in place of my software's existing SOCKET connection, hopefully without needing to change much more on the client side. It initially seemed to be my solution, but on further inspection it seems that the OS gets first chance at the received data on this socket: when I get notified of events via the standard select(...) statement on the socket and query the size of the data available via ioctlsocket, the call succeeds but returns 0 bytes available, the reads don't work, and it goes downhill from there. Can someone tell me of a client-side library (commercial is fine) that will let me get past these proxy server environments with as little user and IT staff help as possible? From what I read, proxy traversal has grown past SOCKS, and I figure someone has to have solved this problem before me. Thanks for reading my long-winded question, Ripred
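    As an illustration only - not from the original question, and sketched in C# rather than the poster's C/C++ - the usual programmatic way out of an HTTP proxy environment for a non-HTTP protocol is the proxy's CONNECT method, which asks the proxy to open an opaque byte tunnel to a given host and port (many proxies permit CONNECT only to port 443, which happens to match the secure listener described above). ProxyTunnel and the host/port parameters are placeholders; proxy authentication (Proxy-Authorization) and proxy-address discovery are deliberately left out.
    // Sketch only: open a CONNECT tunnel through an HTTP proxy and hand back the
    // stream, which from then on carries the custom protocol end-to-end.
    // (In real code, keep the TcpClient referenced for the life of the tunnel.)
    using System.IO;
    using System.Net.Sockets;
    using System.Text;

    static class ProxyTunnel
    {
        public static NetworkStream Open(string proxyHost, int proxyPort,
                                         string targetHost, int targetPort)
        {
            TcpClient client = new TcpClient(proxyHost, proxyPort);
            NetworkStream stream = client.GetStream();

            // Ask the proxy to establish a raw tunnel to the real server.
            string request = "CONNECT " + targetHost + ":" + targetPort + " HTTP/1.1\r\n"
                           + "Host: " + targetHost + ":" + targetPort + "\r\n\r\n";
            byte[] bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            // Read the reply one byte at a time up to the blank line that ends the
            // headers, so no tunnelled bytes get swallowed by a buffered reader.
            StringBuilder reply = new StringBuilder();
            int b;
            while ((b = stream.ReadByte()) != -1)
            {
                reply.Append((char)b);
                if (reply.Length >= 4 && reply.ToString(reply.Length - 4, 4) == "\r\n\r\n")
                    break;
            }
            if (!reply.ToString().StartsWith("HTTP/1.1 200") &&
                !reply.ToString().StartsWith("HTTP/1.0 200"))
                throw new IOException("Proxy refused CONNECT: " + reply);

            return stream;   // now a plain bidirectional pipe to targetHost:targetPort
        }
    }
    If the proxy insists on inspecting even tunnelled traffic, running the existing TLS (port 443) protocol inside the tunnel is the usual next step; the proxy's own address still has to come from the system proxy settings or from the user, which is the one part WinInet/WinHTTP remains handy for.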

    Read the article

  • TypeError: Error #2007: Parameter child must be non-null.

    - by Bobby Francis Joseph
    I am running the following piece of code: package { import fl.controls.Button; import fl.controls.Label; import fl.controls.RadioButton; import fl.controls.RadioButtonGroup; import flash.display.Sprite; import flash.events.MouseEvent; import flash.text.TextFieldAutoSize; public class RadioButtonExample extends Sprite { private var j:uint; private var padding:uint = 10; private var currHeight:uint = 0; private var verticalSpacing:uint = 30; private var rbg:RadioButtonGroup; private var questionLabel:Label; private var answerLabel:Label; private var question:String = "What day is known internationally as Speak Like A Pirate Day?"; private var answers:Array = [ "August 12", "March 4", "September 19", "June 22" ]; public function RadioButtonExample() { setupQuiz(); } private function setupQuiz():void { setupQuestionLabel(); setupRadioButtons(); setupButton(); setupAnswerLabel(); } private function setupQuestionLabel():void { questionLabel = new Label(); questionLabel.text = question; questionLabel.autoSize = TextFieldAutoSize.LEFT; questionLabel.move(padding, padding + currHeight); currHeight += verticalSpacing; addChild(questionLabel); } private function setupAnswerLabel():void { answerLabel = new Label(); answerLabel.text = ""; answerLabel.autoSize = TextFieldAutoSize.LEFT; answerLabel.move(padding + 120, padding + currHeight); addChild(answerLabel); } private function setupRadioButtons():void { rbg = new RadioButtonGroup("question1"); createRadioButton(answers[0], rbg); createRadioButton(answers[1], rbg); createRadioButton(answers[2], rbg); createRadioButton(answers[3], rbg); } private function setupButton():void { var b:Button = new Button(); b.move(padding, padding + currHeight); b.label = "Check Answer"; b.addEventListener(MouseEvent.CLICK, checkAnswer); addChild(b); } private function createRadioButton(rbLabel:String, rbg:RadioButtonGroup):void { var rb:RadioButton = new RadioButton(); rb.group = rbg; rb.label = rbLabel; rb.move(padding, padding + currHeight); addChild(rb); currHeight += verticalSpacing; } private function checkAnswer(e:MouseEvent):void { if (rbg.selection == null) { return; } var resultStr:String = (rbg.selection.label == answers[2]) ? "Correct" : "Incorrect"; answerLabel.text = resultStr; } } } This is from Adobe Livedocs. http://www.adobe.com/livedocs/flash/9.0/ActionScriptLangRefV3/ I have added a Radiobutton to stage and then deleted it so that its there in the library. However I am getting the following error TypeError: Error #2007: Parameter child must be non-null. at flash.display::DisplayObjectContainer/addChildAt() at fl.controls::BaseButton/fl.controls:BaseButton::drawBackground() at fl.controls::LabelButton/fl.controls:LabelButton::draw() at fl.controls::Button/fl.controls:Button::draw() at fl.core::UIComponent/::callLaterDispatcher() Can anyone tell me what is going on and how to remove this error.

    Read the article

  • What is the fastest way to Initialize a multi-dimensional array to non-default values in .NET?

    - by AMissico
    How do I initialize a multi-dimensional array of a primitive type as fast as possible? I am stuck with using multi-dimensional arrays. My problem is performance. The following routine initializes a 100x100 array in approx. 500 ticks. Removing the int.MaxValue initialization results in approx. 180 ticks just for the looping. Approximately 100 ticks to create the array without looping and without initializing to int.MaxValue. Routines similar to this are called a few hundred-thousand to several million times during a "run". The array size will not change during a run, and arrays are created one at a time, used, then discarded, and a new array created. A "run" may last from one minute (using 10x10 arrays) to forty-five minutes (100x100). The application creates arrays of int, bool, and struct. There can be multiple "runs" executing at the same time, but they are not, because performance degrades terribly. I am using 100x100 as a baseline. I am open to suggestions on how to optimize this non-default initialization of an array. One idea I had is to use a smaller primitive type when available. For instance, using byte instead of int saves 100 ticks. I would be happy with this, but I am hoping that I don't have to change the primitive data type.
    public int[,] CreateArray(Size size)
    {
        int[,] array = new int[size.Width, size.Height];
        for (int x = 0; x < size.Width; x++)
        {
            for (int y = 0; y < size.Height; y++)
            {
                array[x, y] = int.MaxValue;
            }
        }
        return array;
    }
    Down to 450 ticks with the following:
    public int[,] CreateArray1(Size size)
    {
        int iX = size.Width;
        int iY = size.Height;
        int[,] array = new int[iX, iY];
        for (int x = 0; x < iX; x++)
        {
            for (int y = 0; y < iY; y++)
            {
                array[x, y] = int.MaxValue;
            }
        }
        return array;
    }
    Down to approximately 165 ticks after a one-time initialization of 2800 ticks. (See my answer below.) If I can get stackalloc to work with multi-dimensional arrays, I should be able to get the same performance without having to initialize the private static array.
    private static bool _arrayInitialized5;
    private static int[,] _array5;
    public static int[,] CreateArray5(Size size)
    {
        if (!_arrayInitialized5)
        {
            int iX = size.Width;
            int iY = size.Height;
            _array5 = new int[iX, iY];
            for (int x = 0; x < iX; x++)
            {
                for (int y = 0; y < iY; y++)
                {
                    _array5[x, y] = int.MaxValue;
                }
            }
            _arrayInitialized5 = true;
        }
        return (int[,])_array5.Clone();
    }
    Down to approximately 165 ticks without using the "clone technique" above. (See my answer below.) I am sure I can get the ticks lower, if I can just figure out the return of CreateArray9.
    public unsafe static int[,] CreateArray8(Size size)
    {
        int iX = size.Width;
        int iY = size.Height;
        int[,] array = new int[iX, iY];
        fixed (int* pfixed = array)
        {
            int count = array.Length;
            for (int* p = pfixed; count-- > 0; p++)
                *p = int.MaxValue;
        }
        return array;
    }
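    A hedged variation on the one-time-initialization idea above (not from the original post): build the fully initialized template once, then copy its raw bytes into each new array with Buffer.BlockCopy, which accepts rectangular arrays of primitive types. _template and CreateArrayBlockCopy are illustrative names, the method is not thread-safe as written, and whether it actually beats the Clone-based CreateArray5 would have to be measured on the real workload.
    private static int[,] _template;   // illustrative; built once per fixed array size

    public static int[,] CreateArrayBlockCopy(Size size)
    {
        if (_template == null)
        {
            // Pay the nested-loop cost once to build a fully initialized template.
            _template = new int[size.Width, size.Height];
            for (int x = 0; x < size.Width; x++)
                for (int y = 0; y < size.Height; y++)
                    _template[x, y] = int.MaxValue;
        }

        // Copy the template's bytes into a fresh array instead of looping again.
        int[,] array = new int[size.Width, size.Height];
        Buffer.BlockCopy(_template, 0, array, 0, array.Length * sizeof(int));
        return array;
    }
    Like CreateArray5, this moves the per-call work from a nested loop to a single bulk copy; the difference between Clone and BlockCopy is likely small, so profiling on the 10x10 and 100x100 cases should decide.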

    Read the article

  • How to revert back from SSL to non-SSL in Tomcat 6?

    - by mohamida
    I'm using jsf 2 + jaas + ssl + tomcat 6.0.26 I have in my web site 2 paths: /faces/protected/* which uses SSL /faces/unprotected/* which don't uses SSL. I've put this in my web.xml: <login-config> <auth-method>FORM</auth-method> <form-login-config> <form-login-page>/faces/login.jsp</form-login-page> <form-error-page>/faces/error.jsp</form-error-page> </form-login-config> </login-config> <security-constraint> <web-resource-collection> <web-resource-name>Secure Resource</web-resource-name> <description/> <url-pattern>/faces/unprotected/*</url-pattern> <http-method>GET</http-method> <http-method>POST</http-method> <http-method>HEAD</http-method> <http-method>PUT</http-method> <http-method>OPTIONS</http-method> <http-method>TRACE</http-method> <http-method>DELETE</http-method> </web-resource-collection> <auth-constraint> <role-name>C</role-name> </auth-constraint> </security-constraint> <security-constraint> <web-resource-collection> <web-resource-name>Secure Resource</web-resource-name> <description /> <url-pattern>/faces/protected/*</url-pattern> <http-method>GET</http-method> <http-method>POST</http-method> <http-method>HEAD</http-method> <http-method>PUT</http-method> <http-method>OPTIONS</http-method> <http-method>TRACE</http-method> <http-method>DELETE</http-method> </web-resource-collection> <auth-constraint> <role-name>C</role-name> </auth-constraint> <user-data-constraint> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </user-data-constraint> </security-constraint> <security-role> <description> Role Client </description> <role-name>C</role-name> </security-role> and this is my server.xml: <Connector port="8080" protocol="HTTP/1.1" maxThreads="400" maxKeepAliveRequests="1" acceptCount="100" connectionTimeout="3000" redirectPort="8443" compression="on" compressionMinSize="2048" noCompressionUserAgents="gozilla, traviata" compressableMimeType="text/javascript,text/css,text/html, text/xml,text/plain,application/x-javascript,application/javascript,application/xhtml+xml" /> <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol" SSLEnabled="true" maxThreads="400" scheme="https" secure="true" clientAuth="optional" sslProtocol="TLS" SSLCertificateFile="path/to/crt" SSLCertificateKeyFile="path/to/pem"/> when i enter to protected paths, it switches to HTTPS (port 8443), but when i enter to path /faces/unprotected/somthing... it stays using HTTPS. what i want is when i enter to unprotected paths, it revert-back to non-SSL communications ( otherwise, i have to re-login again when i set the exact adress in my browser). What's wrong with my configurations ? Is there a way so i can do such a thing ?

    Read the article

  • Non-recursive way to position a genogram in 2D points for the x-axis. Descendants are below

    - by Nassign
    I currently was tasked to make a genogram for a family consisting of siblings, parents with aunts and uncles with grandparents and greatgrandparents for only blood relatives. My current algorithm is using recursion. but I am wondering how to do it in non recursive way to make it more efficient. it is programmed in c# using graphics to draw on a bitmap. Current algorithm for calculating x position, the y position is by getting the generation number. public void StartCalculatePosition() { // Search the start node (The only node with targetFlg set to true) Person start = null; foreach (Person p in PersonDic.Values) { if (start == null) start = p; if (p.Targetflg) { start = p; break; } } CalcPositionRecurse(start); // Normalize the position (shift all values to positive value) // Get the minimum value (must be negative) // Then offset the position of all marriage and person with that to make it start from zero float minPosition = float.MaxValue; foreach (Person p in PersonDic.Values) { if (minPosition > p.Position) { minPosition = p.Position; } } if (minPosition < 0) { foreach (Person p in PersonDic.Values) { p.Position -= minPosition; } foreach (Marriage m in MarriageList) { m.ParentsPosition -= minPosition; m.ChildrenPosition -= minPosition; } } } /// <summary> /// Calculate position of genogram using recursion /// </summary> /// <param name="psn"></param> private void CalcPositionRecurse(Person psn) { // End the recursion if (psn.BirthMarriage == null || psn.BirthMarriage.Parents.Count == 0) { psn.Position = 0.0f; if (psn.BirthMarriage != null) { psn.BirthMarriage.ParentsPosition = 0.0f; psn.BirthMarriage.ChildrenPosition = 0.0f; } CalculateSiblingPosition(psn); return; } // Left recurse if (psn.Father != null) { CalcPositionRecurse(psn.Father); } // Right recurse if (psn.Mother != null) { CalcPositionRecurse(psn.Mother); } // Merge Position if (psn.Father != null && psn.Mother != null) { AdjustConflict(psn.Father, psn.Mother); // Position person in center of parent psn.Position = (psn.Father.Position + psn.Mother.Position) / 2; psn.BirthMarriage.ParentsPosition = psn.Position; psn.BirthMarriage.ChildrenPosition = psn.Position; } else { // Single mom or single dad if (psn.Father != null) { psn.Position = psn.Father.Position; psn.BirthMarriage.ParentsPosition = psn.Position; psn.BirthMarriage.ChildrenPosition = psn.Position; } else if (psn.Mother != null) { psn.Position = psn.Mother.Position; psn.BirthMarriage.ParentsPosition = psn.Position; psn.BirthMarriage.ChildrenPosition = psn.Position; } else { // Should not happen, checking in start of function } } // Arrange the siblings base on my position (left younger, right older) CalculateSiblingPosition(psn); } private float GetRightBoundaryAncestor(Person psn) { float rPos = psn.Position; // Get the rightmost position among siblings foreach (Person sibling in psn.Siblings) { if (sibling.Position > rPos) { rPos = sibling.Position; } } if (psn.Father != null) { float rFatherPos = GetRightBoundaryAncestor(psn.Father); if (rFatherPos > rPos) { rPos = rFatherPos; } } if (psn.Mother != null) { float rMotherPos = GetRightBoundaryAncestor(psn.Mother); if (rMotherPos > rPos) { rPos = rMotherPos; } } return rPos; } private float GetLeftBoundaryAncestor(Person psn) { float rPos = psn.Position; // Get the rightmost position among siblings foreach (Person sibling in psn.Siblings) { if (sibling.Position < rPos) { rPos = sibling.Position; } } if (psn.Father != null) { float rFatherPos = GetLeftBoundaryAncestor(psn.Father); if (rFatherPos < rPos) { rPos = 
rFatherPos; } } if (psn.Mother != null) { float rMotherPos = GetLeftBoundaryAncestor(psn.Mother); if (rMotherPos < rPos) { rPos = rMotherPos; } } return rPos; } /// <summary> /// Check if two parent group has conflict and compensate on the conflict /// </summary> /// <param name="leftGroup"></param> /// <param name="rightGroup"></param> public void AdjustConflict(Person leftGroup, Person rightGroup) { float leftMax = GetRightBoundaryAncestor(leftGroup); leftMax += 0.5f; float rightMin = GetLeftBoundaryAncestor(rightGroup); rightMin -= 0.5f; float diff = leftMax - rightMin; if (diff > 0.0f) { float moveHalf = Math.Abs(diff) / 2; RecurseMoveAncestor(leftGroup, 0 - moveHalf); RecurseMoveAncestor(rightGroup, moveHalf); } } /// <summary> /// Recursively move a person and all his/her ancestor /// </summary> /// <param name="psn"></param> /// <param name="moveUnit"></param> public void RecurseMoveAncestor(Person psn, float moveUnit) { psn.Position += moveUnit; foreach (Person siblings in psn.Siblings) { if (siblings.Id != psn.Id) { siblings.Position += moveUnit; } } if (psn.BirthMarriage != null) { psn.BirthMarriage.ChildrenPosition += moveUnit; psn.BirthMarriage.ParentsPosition += moveUnit; } if (psn.Father != null) { RecurseMoveAncestor(psn.Father, moveUnit); } if (psn.Mother != null) { RecurseMoveAncestor(psn.Mother, moveUnit); } } /// <summary> /// Calculate the position of the siblings /// </summary> /// <param name="psn"></param> /// <param name="anchor"></param> public void CalculateSiblingPosition(Person psn) { if (psn.Siblings.Count == 0) { return; } List<Person> sibling = psn.Siblings; int argidx; for (argidx = 0; argidx < sibling.Count; argidx++) { if (sibling[argidx].Id == psn.Id) { break; } } // Compute position for each brother that is younger that person int idx; for (idx = argidx - 1; idx >= 0; idx--) { sibling[idx].Position = sibling[idx + 1].Position - 1; } for (idx = argidx + 1; idx < sibling.Count; idx++) { sibling[idx].Position = sibling[idx - 1].Position + 1; } }
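    Not part of the original code, but a minimal sketch of the non-recursive direction the question asks about: the ancestor recursion in CalcPositionRecurse can be replaced by a two-stack walk that yields people in the same order the recursion completes them (a person's father- and mother-lines before the person itself). PostOrderAncestors is an illustrative name; Person, Father and Mother are the members already used above, and the merging/positioning logic from the post would then run in a plain loop over the returned sequence.
    // Sketch: iterative post-order walk of the ancestor tree using two explicit
    // stacks (requires System.Collections.Generic). Popping 'ordered' yields the
    // father's line, then the mother's line, then the person - the same completion
    // order as the recursive CalcPositionRecurse.
    private static IEnumerable<Person> PostOrderAncestors(Person start)
    {
        Stack<Person> work = new Stack<Person>();
        Stack<Person> ordered = new Stack<Person>();
        work.Push(start);
        while (work.Count > 0)
        {
            Person current = work.Pop();
            ordered.Push(current);
            if (current.Father != null) work.Push(current.Father);
            if (current.Mother != null) work.Push(current.Mother);
        }
        while (ordered.Count > 0)
        {
            yield return ordered.Pop();   // ancestors always come out before descendants
        }
    }
    This removes the call-stack depth limit; whether it is measurably faster than the recursion for trees of this size is something to profile, since the merge and AdjustConflict work is unchanged.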

    Read the article

  • Default /etc/apt/sources.list?

    - by piemesons
    I need the default source list for Ubuntu 10.04. Can anybody help me? Here is mine:
    # Ubuntu supported packages
    deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
    deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
    deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
    deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
    deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
    deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
    deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
    deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
    # Canonical Commercial Repository
    deb http://archive.canonical.com/ubuntu lucid partner
    deb http://archive.canonical.com/ubuntu lucid-backports partner
    deb http://archive.canonical.com/ubuntu lucid-updates partner
    deb http://archive.canonical.com/ubuntu lucid-security partner
    deb http://archive.canonical.com/ubuntu lucid-proposed partner
    deb-src http://archive.canonical.com/ubuntu lucid partner
    deb-src http://archive.canonical.com/ubuntu lucid-backports partner
    deb-src http://archive.canonical.com/ubuntu lucid-updates partner
    deb-src http://archive.canonical.com/ubuntu lucid-security partner
    deb-src http://archive.canonical.com/ubuntu lucid-proposed partner
    # medibuntu
    deb http://packages.medibuntu.org/ lucid free non-free
    deb-src http://packages.medibuntu.org/ lucid free non-free
    # PlayOnLinux
    deb http://deb.playonlinux.com/ lucid main
    # opera
    deb http://deb.opera.com/opera/ lenny non-free
    # google
    deb http://dl.google.com/linux/deb/ stable non-free main
    # Dropbox Official Source
    deb http://linux.dropbox.com/ubuntu karmic main
    # Skype
    deb http://download.skype.com/linux/repos/debian/ stable non-free
    This is the error I am getting (sudo apt-get update):
    Get:9 http://dl.google.com stable/main Packages [1,076B]
    Err http://ppa.launchpad.net lucid/main Packages 404 Not Found
    Get:10 http://dl.google.com stable/main Packages [735B]
    and finally:
    Fetched 9,724B in 3s (2,645B/s)
    W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
    E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • PPA causing 404 error?

    - by piemesons
    I need the default source list for Ubuntu 10.04. Can anybody help me? Here is mine:
    # Ubuntu supported packages
    deb http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
    deb http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
    deb http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
    deb-src http://archive.ubuntu.com/ubuntu/ lucid main restricted multiverse universe
    deb-src http://archive.ubuntu.com/ubuntu/ lucid-backports main restricted universe multiverse
    deb-src http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted multiverse universe
    deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
    deb-src http://security.ubuntu.com/ubuntu lucid-proposed main restricted universe multiverse
    # Canonical Commercial Repository
    deb http://archive.canonical.com/ubuntu lucid partner
    deb http://archive.canonical.com/ubuntu lucid-backports partner
    deb http://archive.canonical.com/ubuntu lucid-updates partner
    deb http://archive.canonical.com/ubuntu lucid-security partner
    deb http://archive.canonical.com/ubuntu lucid-proposed partner
    deb-src http://archive.canonical.com/ubuntu lucid partner
    deb-src http://archive.canonical.com/ubuntu lucid-backports partner
    deb-src http://archive.canonical.com/ubuntu lucid-updates partner
    deb-src http://archive.canonical.com/ubuntu lucid-security partner
    deb-src http://archive.canonical.com/ubuntu lucid-proposed partner
    # medibuntu
    deb http://packages.medibuntu.org/ lucid free non-free
    deb-src http://packages.medibuntu.org/ lucid free non-free
    # PlayOnLinux
    deb http://deb.playonlinux.com/ lucid main
    # opera
    deb http://deb.opera.com/opera/ lenny non-free
    # google
    deb http://dl.google.com/linux/deb/ stable non-free main
    # Dropbox Official Source
    deb http://linux.dropbox.com/ubuntu karmic main
    # Skype
    deb http://download.skype.com/linux/repos/debian/ stable non-free
    This is the error I am getting (sudo apt-get update):
    Get:9 http://dl.google.com stable/main Packages [1,076B]
    Err http://ppa.launchpad.net lucid/main Packages 404 Not Found
    Get:10 http://dl.google.com stable/main Packages [735B]
    and finally:
    Fetched 9,724B in 3s (2,645B/s)
    W: Failed to fetch http://ppa.launchpad.net/bisig/ppa/ubuntu/dists/lucid/main/binary-i386/Packages.gz 404 Not Found
    E: Some index files failed to download, they have been ignored, or old ones used instead.

    Read the article

  • Something for the weekend - What's the most complex query?

    - by simonsabin
    Whenever I teach about SQL Server performance tuning I try can get across the message that there is no such thing as a table. Does that sound odd, well it isn't, trust me. Rather than tables you need to consider structures. You have 1. Heaps 2. Indexes (b-trees) Some people split indexes in two, clustered and non-clustered, this I feel confuses the situation as people associate clustered indexes with sorting, but don't associate non clustered indexes with sorting, this is wrong. Clustered and non-clustered indexes are the same b-tree structure(and even more so with SQL 2005) with the leaf pages sorted in a linked list according to the keys of the index.. The difference is that non clustered indexes include in their structure either, the clustered key(s), or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same, they have key columns which are stored on the root and intermediary pages, and included columns which are on the leaf level. The reason this is important is that this is how the optimiser sees the world, this means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend. So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you that are thinking, great I will just use lots of xquery functions, sorry these are the rules. 1. You have to use a table from AdventureWorks (2005 or 2008) 2. You can add whatever indexes you like, but you must document these 3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function. 4. You can only reference your table once, i..e a FROM clause with ONE table and no JOINs 5. No Sub queries. The aim of this is to show how the optimiser can use multiple structures to build the results of a query and to also highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example create these two indexes on AdventureWorks2008 create index IX_Person_Person on Person.Person (lastName, FirstName,NameStyle,PersonType) create index IX_Person_Person on Person.Person(BusinessentityId,ModifiedDate)with drop_existing    select lastName, ModifiedDate   from Person.Person  where LastName = 'Smith' You will see that the optimiser has decided to not access the underlying clustered index of the table but to use two indexes above to resolve the query. This highlights how the optimiser considers all storage structures, clustered indexes, non clustered indexes and heaps when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.  

    Read the article

  • Beware Sneaky Reads with Unique Indexes

    - by Paul White NZ
    A few days ago, Sandra Mueller (twitter | blog) asked a question using twitter’s #sqlhelp hash tag: “Might SQL Server retrieve (out-of-row) LOB data from a table, even if the column isn’t referenced in the query?” Leaving aside trivial cases (like selecting a computed column that does reference the LOB data), one might be tempted to say that no, SQL Server does not read data you haven’t asked for.  In general, that’s quite correct; however there are cases where SQL Server might sneakily retrieve a LOB column… Example Table Here’s a T-SQL script to create that table and populate it with 1,000 rows: CREATE TABLE dbo.LOBtest ( pk INTEGER IDENTITY NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( some_value, lob_data ) SELECT TOP (1000) N.n, @Data FROM Numbers N WHERE N.n <= 1000; Test 1: A Simple Update Let’s run a query to subtract one from every value in the some_value column: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; As you might expect, modifying this integer column in 1,000 rows doesn’t take very long, or use many resources.  The STATITICS IO and TIME output shows a total of 9 logical reads, and 25ms elapsed time.  The query plan is also very simple: Looking at the Clustered Index Scan, we can see that SQL Server only retrieves the pk and some_value columns during the scan: The pk column is needed by the Clustered Index Update operator to uniquely identify the row that is being changed.  The some_value column is used by the Compute Scalar to calculate the new value.  (In case you are wondering what the Top operator is for, it is used to enforce SET ROWCOUNT). Test 2: Simple Update with an Index Now let’s create a nonclustered index keyed on the some_value column, with lob_data as an included column: CREATE NONCLUSTERED INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); This is not a useful index for our simple update query; imagine that someone else created it for a different purpose.  Let’s run our update query again: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; We find that it now requires 4,014 logical reads and the elapsed query time has increased to around 100ms.  The extra logical reads (4 per row) are an expected consequence of maintaining the nonclustered index. The query plan is very similar to before (click to enlarge): The Clustered Index Update operator picks up the extra work of maintaining the nonclustered index. The new Compute Scalar operators detect whether the value in the some_value column has actually been changed by the update.  SQL Server may be able to skip maintaining the nonclustered index if the value hasn’t changed (see my previous post on non-updating updates for details).  Our simple query does change the value of some_data in every row, so this optimization doesn’t add any value in this specific case. The output list of columns from the Clustered Index Scan hasn’t changed from the one shown previously: SQL Server still just reads the pk and some_data columns.  Cool. 
Overall then, adding the nonclustered index hasn’t had any startling effects, and the LOB column data still isn’t being read from the table.  Let’s see what happens if we make the nonclustered index unique. Test 3: Simple Update with a Unique Index Here’s the script to create a new unique index, and drop the old one: CREATE UNIQUE NONCLUSTERED INDEX [UQ dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest (some_value) INCLUDE ( lob_data ) WITH ( FILLFACTOR = 100, MAXDOP = 1, SORT_IN_TEMPDB = ON ); GO DROP INDEX [IX dbo.LOBtest some_value (lob_data)] ON dbo.LOBtest; Remember that SQL Server only enforces uniqueness on index keys (the some_data column).  The lob_data column is simply stored at the leaf-level of the non-clustered index.  With that in mind, we might expect this change to make very little difference.  Let’s see: UPDATE dbo.LOBtest WITH (TABLOCKX) SET some_value = some_value - 1; Whoa!  Now look at the elapsed time and logical reads: Scan count 1, logical reads 2016, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   CPU time = 172 ms, elapsed time = 16172 ms. Even with all the data and index pages in memory, the query took over 16 seconds to update just 1,000 rows, performing over 52,000 LOB logical reads (nearly 16,000 of those using read-ahead). Why on earth is SQL Server reading LOB data in a query that only updates a single integer column? The Query Plan The query plan for test 3 looks a bit more complex than before: In fact, the bottom level is exactly the same as we saw with the non-unique index.  The top level has heaps of new stuff though, which I’ll come to in a moment. You might be expecting to find that the Clustered Index Scan is now reading the lob_data column (for some reason).  After all, we need to explain where all the LOB logical reads are coming from.  Sadly, when we look at the properties of the Clustered Index Scan, we see exactly the same as before: SQL Server is still only reading the pk and some_value columns – so what’s doing the LOB reads? Updates that Sneakily Read Data We have to go as far as the Clustered Index Update operator before we see LOB data in the output list: [Expr1020] is a bit flag added by an earlier Compute Scalar.  It is set true if the some_value column has not been changed (part of the non-updating updates optimization I mentioned earlier). The Clustered Index Update operator adds two new columns: the lob_data column, and some_value_OLD.  The some_value_OLD column, as the name suggests, is the pre-update value of the some_value column.  At this point, the clustered index has already been updated with the new value, but we haven’t touched the nonclustered index yet. An interesting observation here is that the Clustered Index Update operator can read a column into the data flow as part of its update operation.  SQL Server could have read the LOB data as part of the initial Clustered Index Scan, but that would mean carrying the data through all the operations that occur prior to the Clustered Index Update.  The server knows it will have to go back to the clustered index row to update it, so it delays reading the LOB data until then.  Sneaky! Why the LOB Data Is Needed This is all very interesting (I hope), but why is SQL Server reading the LOB data?  For that matter, why does it need to pass the pre-update value of the some_value column out of the Clustered Index Update? The answer relates to the top row of the query plan for test 3.  
I’ll reproduce it here for convenience: Notice that this is a wide (per-index) update plan.  SQL Server used a narrow (per-row) update plan in test 2, where the Clustered Index Update took care of maintaining the nonclustered index too.  I’ll talk more about this difference shortly. The Split/Sort/Collapse combination is an optimization, which aims to make per-index update plans more efficient.  It does this by breaking each update into a delete/insert pair, reordering the operations, removing any redundant operations, and finally applying the net effect of all the changes to the nonclustered index. Imagine we had a unique index which currently holds three rows with the values 1, 2, and 3.  If we run a query that adds 1 to each row value, we would end up with values 2, 3, and 4.  The net effect of all the changes is the same as if we simply deleted the value 1, and added a new value 4. By applying net changes, SQL Server can also avoid false unique-key violations.  If we tried to immediately update the value 1 to a 2, it would conflict with the existing value 2 (which would soon be updated to 3 of course) and the query would fail.  You might argue that SQL Server could avoid the uniqueness violation by starting with the highest value (3) and working down.  That’s fine, but it’s not possible to generalize this logic to work with every possible update query. SQL Server has to use a wide update plan if it sees any risk of false uniqueness violations.  It’s worth noting that the logic SQL Server uses to detect whether these violations are possible has definite limits.  As a result, you will often receive a wide update plan, even when you can see that no violations are possible. Another benefit of this optimization is that it includes a sort on the index key as part of its work.  Processing the index changes in index key order promotes sequential I/O against the nonclustered index. A side-effect of all this is that the net changes might include one or more inserts.  In order to insert a new row in the index, SQL Server obviously needs all the columns – the key column and the included LOB column.  This is the reason SQL Server reads the LOB data as part of the Clustered Index Update. In addition, the some_value_OLD column is required by the Split operator (it turns updates into delete/insert pairs).  In order to generate the correct index key delete operation, it needs the old key value. The irony is that in this case the Split/Sort/Collapse optimization is anything but.  Reading all that LOB data is extremely expensive, so it is sad that the current version of SQL Server has no way to avoid it. Finally, for completeness, I should mention that the Filter operator is there to filter out the non-updating updates. Beating the Set-Based Update with a Cursor One situation where SQL Server can see that false unique-key violations aren’t possible is where it can guarantee that only one row is being updated.  
Armed with this knowledge, we can write a cursor (or the WHILE-loop equivalent) that updates one row at a time, and so avoids reading the LOB data: SET NOCOUNT ON; SET STATISTICS XML, IO, TIME OFF;   DECLARE @PK INTEGER, @StartTime DATETIME; SET @StartTime = GETUTCDATE();   DECLARE curUpdate CURSOR LOCAL FORWARD_ONLY KEYSET SCROLL_LOCKS FOR SELECT L.pk FROM LOBtest L ORDER BY L.pk ASC;   OPEN curUpdate;   WHILE (1 = 1) BEGIN FETCH NEXT FROM curUpdate INTO @PK;   IF @@FETCH_STATUS = -1 BREAK; IF @@FETCH_STATUS = -2 CONTINUE;   UPDATE dbo.LOBtest SET some_value = some_value - 1 WHERE CURRENT OF curUpdate; END;   CLOSE curUpdate; DEALLOCATE curUpdate;   SELECT DATEDIFF(MILLISECOND, @StartTime, GETUTCDATE()); That completes the update in 1280 milliseconds (remember test 3 took over 16 seconds!) I used the WHERE CURRENT OF syntax there and a KEYSET cursor, just for the fun of it.  One could just as well use a WHERE clause that specified the primary key value instead. Clustered Indexes A clustered index is the ultimate index with included columns: all non-key columns are included columns in a clustered index.  Let’s re-create the test table and data with an updatable primary key, and without any non-clustered indexes: IF OBJECT_ID(N'dbo.LOBtest', N'U') IS NOT NULL DROP TABLE dbo.LOBtest; GO CREATE TABLE dbo.LOBtest ( pk INTEGER NOT NULL, some_value INTEGER NULL, lob_data VARCHAR(MAX) NULL, another_column CHAR(5) NULL, CONSTRAINT [PK dbo.LOBtest pk] PRIMARY KEY CLUSTERED (pk ASC) ); GO DECLARE @Data VARCHAR(MAX); SET @Data = REPLICATE(CONVERT(VARCHAR(MAX), 'x'), 65540);   WITH Numbers (n) AS ( SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2 ) INSERT LOBtest WITH (TABLOCKX) ( pk, some_value, lob_data ) SELECT TOP (1000) N.n, N.n, @Data FROM Numbers N WHERE N.n <= 1000; Now here’s a query to modify the cluster keys: UPDATE dbo.LOBtest SET pk = pk + 1; The query plan is: As you can see, the Split/Sort/Collapse optimization is present, and we also gain an Eager Table Spool, for Halloween protection.  In addition, SQL Server now has no choice but to read the LOB data in the Clustered Index Scan: The performance is not great, as you might expect (even though there is no non-clustered index to maintain): Table 'LOBtest'. Scan count 1, logical reads 2011, physical reads 0, read-ahead reads 0, lob logical reads 36015, lob physical reads 0, lob read-ahead reads 15992.   Table 'Worktable'. Scan count 1, logical reads 2040, physical reads 0, read-ahead reads 0, lob logical reads 34000, lob physical reads 0, lob read-ahead reads 8000.   SQL Server Execution Times: CPU time = 483 ms, elapsed time = 17884 ms. Notice how the LOB data is read twice: once from the Clustered Index Scan, and again from the work table in tempdb used by the Eager Spool. If you try the same test with a non-unique clustered index (rather than a primary key), you’ll get a much more efficient plan that just passes the cluster key (including uniqueifier) around (no LOB data or other non-key columns): A unique non-clustered index (on a heap) works well too: Both those queries complete in a few tens of milliseconds, with no LOB reads, and just a few thousand logical reads.  (In fact the heap is rather more efficient). There are lots more fun combinations to try that I don’t have space for here. Final Thoughts The behaviour shown in this post is not limited to LOB data by any means.  
If the conditions are met, any unique index that has included columns can produce similar behaviour – something to bear in mind when adding large INCLUDE columns to achieve covering queries, perhaps. Paul White Email: [email protected] Twitter: @PaulWhiteNZ

    Read the article

  • Microsoft Sponsored - Give Camp

    - by Ken Lovely, MCSE, MCDBA, MCTS
    Are you ready to connect with the local tech community for a good cause? GiveCamp needs your support. For one weekend in June, we’ll take on the technology wish lists of 20 non-profit organizations, and we’re looking for about 100 volunteers, both technical and non-technical, to help us do it. A typical GiveCamp draws 75 to 100 volunteers. Individuals can work with their colleagues in company teams, or they can opt to be matched with fellow volunteers who have complementary skill sets. Everyone is welcome to head home for the evenings – but there are always the diehards who work from Friday kickoff straight through Sunday afternoon. Food and drinks, especially of the caffeinated variety, are provided, along with game systems for breaks.
    Technical volunteers: We're looking for graphic or UX designers, developers with .NET/Java/LAMP/Open Source/CMS experience, project managers, system/network administrators, DBAs, and non-profit technical consultants and web strategists.
    Non-technical volunteers: Beyond the technology, there are many other aspects that make GiveCamp a success. We need non-technical volunteers to run errands, help with setting up and cleaning up, and everything in between.
    Whether you can offer a couple of hours of your time or join GiveCamp for a couple of days, your support is needed. Sign up at: http://www.eventbrite.com/event/650615007 Feel free to contact me or Dani Diaz of Microsoft for more information.

    Read the article

  • Data Masking for Oracle E-Business Suite

    - by Troy Kitch
    E-Business Suite customers can now use Oracle Data Masking to obscure sensitive information in non-production environments. Many organizations are inadvertently exposed when copying sensitive or regulated production data into non-production database environments for development, quality assurance or outsourcing purposes. Due to weak security controls and unmonitored access, these non-production environments have increasingly become the target of cyber criminals. Learn more about the announcement here.

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have setup your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with a flexibility to define new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made); the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to actual question as to what is the impact in having a user-defined customer event.   System defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are few programs which have routines to first validate the completion of disconnection field activities, which were raised as a result of customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from other Severance Event, specifically CNP - Cut for Non-Payment. 
Post cancel algorithm provided by the product - SEV POST CAN does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Notice the underlined text. This algorithm implicitly checks for Field Activities having completed status, which were generated from Severance Events as a result of CNP - Cut for Non-payment customer event.   Now if we look back to the customer's issue, we can relate that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity. And hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for a customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events that extend or simulate base customer events, the ones that are included in the base product, ensure that there is no other impact either direct or indirect to other business functions that the application has to offer.  

    Read the article

  • Philly GiveCamp 2010

    - by wulfers
    Spent the weekend helping out several non-profits doing what we like to do best... designing, developing and making people very happy with their new websites, systems, applications and features. From what I saw at this GiveCamp, about 75 percent of the non-profits needed updated or new websites supporting CMS features and the ability for staff members to update the content on their websites. Some cool apps were designed and developed. A centralized system for distribution of daily schedules and task assistance to autistic clients was a show stopper, with an awesome central management interface and iPhone (and, in the future, Windows Phone) support for tactile, auditory and visual cues. SharePoint was upgraded and forms were automated for a volunteer fire company that desperately needed some automation to help the firemen do their primary job. Many cool sites were built for non-profits that had either an outdated or non-existent web presence.

    Read the article

  • Limitations of User-Defined Customer Events (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and the associated SP Condition(s) on the template. CC&B uses the service point type, its state, and the referenced customer event to determine which field activity type to generate. Customer events available in the base product include: Cut for Non-payment (CNP), Disconnect Warning (DIWA), Reconnect for Payment (REPY), Reread (RERD), Stop Service (STOP), Start Service (STRT), and Start/Stop (STSP). Note the Field values/codes defined for each event. CC&B also provides the flexibility to define a new set of customer events. These can be defined in the Look Up CUST_EVT_FLG; values from the Look Up are used on the Field Activity Type Profile Template page. So what's the use of having user-defined customer events? And how will the system detect such events in order to create field activities? Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type of Create Field Activities. This way you can create additional field activities of a specific field activity type for user-defined customer events. One of our customers adopted this feature and created a user-defined customer event, CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event, CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of field activity was generated for disconnection purposes; the field activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state, and the referenced user-defined customer event. All was working well until they realized that, in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process, generate a reconnection field activity, and reconnect the service. Basically, the Post Cancel Algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt. So what exactly was happening? Now we come to the actual question: what are the limitations of having a user-defined customer event? System-defined/base customer events are hard-coded across the entire system; there is an impact even if you remove a base customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above. There are a few programs with routines that first validate the completion of disconnection field activities raised as a result of the customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected by another Severance Event, specifically CNP - Cut for Non-Payment.
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description): This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter. Notice the underlined text: the algorithm implicitly checks for completed field activities that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event. Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any completed CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity was generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead. To conclude, if you introduce new customer events, be aware that they do not extend or simulate the base customer events included in the product, which are used elsewhere to provide and validate additional business functions.
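    To make that behaviour concrete, here is a minimal conceptual sketch in Python. It is not CC&B plug-in code and none of the names are real CC&B APIs; it only illustrates why the base Post Cancel algorithm never finds a match when the completed activity was raised from a user-defined event such as CNPW.

        # Conceptual sketch only -- NOT CC&B plug-in code; every name here is hypothetical.
        # It illustrates why the base Post Cancel algorithm (SEV POST CAN) ignores field
        # activities generated from a user-defined customer event such as CNPW.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class FieldActivity:
            customer_event: str   # e.g. "CNP", or the user-defined "CNPW"
            status: str           # e.g. "COMPLETED"

        @dataclass
        class SeveranceProcess:
            activities: List[FieldActivity] = field(default_factory=list)

        def post_cancel(process: SeveranceProcess, reconnect_template: str) -> bool:
            """Return True if a reconnection process would be started after cancellation."""
            for activity in process.activities:
                # The base algorithm only recognises the hard-coded CNP customer event.
                if activity.customer_event == "CNP" and activity.status == "COMPLETED":
                    print(f"Starting reconnection process from template {reconnect_template}")
                    return True
            return False

        # A completed activity raised from the user-defined CNPW event never matches,
        # so no reconnection process is started -- the behaviour the customer observed.
        cancelled = SeveranceProcess([FieldActivity("CNPW", "COMPLETED")])
        assert post_cancel(cancelled, "RECONNECT-W") is False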

    Read the article

  • BI&EPM in Focus December 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Share with your customers: October Edition of Business Analytics Customer Newsletter (link). Oracle OpenWorld presentation PDFs available for download (link). OOW Mark Hurd Recap: Business Analytics at Oracle OpenWorld (video | blog). Register your customers for Oracle Days 2012 (link | video). BI & EPM Business Analytics Advisor Webcasts on My.Oracle.Support - Current Schedule and Archived (link).
    Customers: Wüstenrot Efficiently Generates Reports and Analyzes Data with Enterprise Reporting Solution. Empresas Públicas Medellin Gathers Data for Annual, Financial Projections 70% Faster. ICON Improves Month-End Reporting Significantly Using a Single Source for Timely, Consistent Business Intelligence, Reduces Reliance on Spreadsheets. Gilead Sciences, a science-led company backed by business-led IT, uses Oracle solutions to simplify business processes and establish a foundation for continued growth. Dell Enhances the Customer Experience with Oracle's RTD (video). Link to Complete Archive.
    Enterprise Performance Management: eBook: Transforming Enterprise Business Planning (link). Blog: Why CFOs should care about Big Data (link). Oracle Hyperion Project Financial Planning - New Projects Feature Release 11.1.2.2 Video Feature Overview, now available with many other Hyperion overviews on the YouTube Oracle EPM Channel (link). Available Patch Sets and Patch Set Updates for Oracle Hyperion Enterprise Performance Management Products on My.Oracle.Support (link). Hyperion Disclosure Management supplementary materials provide a set of guides for Disclosure Management and Taxonomy Designer users, including best practices guidelines, a full Disclosure Management sample report, a webinar series, and other guiding materials on My.Oracle.Support (link). See the selection of EPM Customer Videos at MediaNetwork (Hyperion).
    Business Intelligence: Webcast Replay: Big Data, Bright Future, featuring Andrew McAfee (link). Webinar series and guides on Getting Started With Hyperion Interactive Reporting Translation Workbench, a tool that accelerates metadata conversion from IR to Oracle BI Enterprise Edition (OBIEE), on My.Oracle.Support (link). See the selection of BI Customer Videos at MediaNetwork (BI) and for (Exalytics) and (Endeca). ORACLE TEAM USA Analytics Dashboard demo - Now Available (link)

    Read the article

  • What is your strategy for converting RC builds into retail?

    - by Matthew PK
    We're trying to implement a strategy for how we transition our builds from RC to released retail code. When we label a build as a release candidate, we send it to QA for regression. If they approve it, that RC then becomes our released retail code. I liked the idea of "obvious" labeling of versions so that a user knows whether they have a beta, an RC, or retail code, where you would have some obvious watermark in non-retail code (think Windows 7, where RC or non-genuine builds show a watermark in the bottom right)... but it seemed strange to us to manipulate the project (to remove the watermark) once it passed regression. If QA certified version a.b.c.d, then our retail code should be that same version, not a.b.c.d+1. What strategies have you employed to clearly label non-release software versions without incrementing your build to disable the watermarks in your retail code? One idea I've considered is having the build look for a signed file in the installer archive; non-release code wouldn't include this file, and so the app would know to display a watermark. But even this seems like QA is then working with non-release code. Ideas?
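    As one possible shape for that last idea, here is a small sketch (Python for brevity; the file name, key, and marker format are made-up examples, not any product's actual mechanism). Every build, RC or retail, contains the same check; only the retail installer ships the marker, so the binary QA certifies is byte-for-byte what you release.

        # Sketch of the "marker file" approach: the app watermarks itself unless a
        # signed marker is present.  Names and the HMAC scheme are hypothetical.
        import hashlib
        import hmac
        from pathlib import Path

        INSTALL_DIR = Path(__file__).resolve().parent
        MARKER = INSTALL_DIR / "retail.marker"      # shipped only in the retail installer
        # Key compiled into every build; the marker is produced once, at release time,
        # by whoever owns the key -- the application binary itself never changes.
        SIGNING_KEY = b"example-key-do-not-use"

        def is_retail_build() -> bool:
            """Retail if the marker exists and its tag matches its payload."""
            if not MARKER.exists():
                return False
            payload, _, tag = MARKER.read_bytes().partition(b"\n")
            expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest().encode()
            return hmac.compare_digest(tag.strip(), expected)

        def maybe_draw_watermark(draw) -> None:
            # 'draw' is whatever your UI layer uses to render overlay text.
            if not is_retail_build():
                draw("RELEASE CANDIDATE -- not for distribution")

    The trade-off raised in the question still applies: QA exercises the code path without the marker present, so a final smoke test of the assembled retail installer is still worthwhile.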

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in: two Nvidia GTX 480's (one at x16 and one at x8) and one Nvidia GTX 460 running at x8. Is there some way, either by a function call in C or an option to lspci, that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks! Note: lspci -vvv dumps out the following for the two GTX 480s. I don't see any differences that pertain to bus speed.
    03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
        Subsystem: eVga.com. Corp. Device 1480
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
        Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
        Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
        Region 5: I/O ports at df00 [disabled] [size=128]
        [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: nvidia
        Kernel modules: nvidia, nvidiafb, nouveau
    03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
        Subsystem: eVga.com. Corp. Device 1480
        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin B routed to IRQ 5
        Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
        Capabilities: <access denied>
    04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
        Subsystem: eVga.com. Corp. Device 1480
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 16
        Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
        Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
        Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
        Region 5: I/O ports at cf00 [size=128]
        [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: nvidia
        Kernel modules: nvidia, nvidiafb, nouveau
    04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
        Subsystem: eVga.com. Corp. Device 1480
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin B routed to IRQ 5
        Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
    And the only differences I see relate specifically to the memory mapping:
    myComputer:~> diff card1 card2
    3c3
    < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    ---
    > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    7,11c7,11
    < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
    < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
    < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
    < Region 5: I/O ports at df00 [disabled] [size=128]
    < [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
    ---
    > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
    > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
    > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
    > Region 5: I/O ports at cf00 [size=128]
    > [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
    18c18
    < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    ---
    > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    19a20
    > Latency: 0, Cache Line Size: 64 bytes
    21c22
    < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
    ---
    > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
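    The negotiated link width normally appears on the LnkSta line of the PCI Express capability, which is hidden here because the Capabilities sections read <access denied>; running lspci -vv as root should reveal it. On reasonably recent kernels the same information is also exposed through sysfs, which a program can read directly. Below is a small sketch (Python for brevity); the device addresses are taken from the lspci output above, and the sysfs attribute names (current_link_speed, current_link_width) are the ones commonly exposed for PCIe devices, so verify they exist on your kernel. From a CUDA program you can map each CUDA device to its bus address via cudaDeviceProp's pciBusID/pciDeviceID fields (or cudaDeviceGetPCIBusId) and then pick the card reporting x16.

        # Sketch: read the negotiated PCIe link speed/width from sysfs.
        # Device addresses below come from the lspci output above and may differ
        # on another machine or after reseating the cards.
        from pathlib import Path

        def link_info(bdf: str) -> tuple:
            """Return (speed, width) for a PCI device such as '03:00.0'."""
            dev = Path("/sys/bus/pci/devices") / f"0000:{bdf}"
            speed = (dev / "current_link_speed").read_text().strip()
            width = (dev / "current_link_width").read_text().strip()
            return speed, width

        for bdf in ("03:00.0", "04:00.0"):   # the two GTX 480s from the question
            speed, width = link_info(bdf)
            print(f"{bdf}: {speed}, x{width}")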

    Read the article

  • Does this licensing clause allow redistribution of this application?

    - by George Edison
    As a software developer, I find a frequent need to create icons for my applications. Nothing has ever worked as well as IcoFX for this purpose. Unfortunately, the program is no longer free - but I still have the installer for an older version. My question is whether or not I can distribute copies of the installer. The license agreement contains the following pertinent clauses: 6. All redistributions of the Software's files must retain all copyright notices and web site addresses that are currently in place, and must include this list of conditions without modification. 7. None of the Software's files may be redistributed for profit or as part of another Software package without express written permission of the Author. 10. The Author reserves his rights to modify this agreement in the future. The first two clauses would seem to suggest that I can legally distribute verbatim copies of the installer but the last clause has me confused. If the author modifies the agreement and removes the ability to distribute copies, does it apply to my copy that I downloaded a while back?

    Read the article

  • Not able to apt-get update from terminal, what to do now?

    - by Utkarsh
    Whenever I try to update from terminal, I get this error:
    root@Utkarsh[utkarsh]#apt-get update
    Hit http://packages.bosslinux.in anokha Release.gpg
    Hit http://packages.bosslinux.in anokha Release
    Hit http://packages.bosslinux.in anokha/contrib Sources
    Hit http://packages.bosslinux.in anokha/non-free Sources
    Hit http://packages.bosslinux.in anokha/main Sources
    Hit http://packages.bosslinux.in anokha/contrib i386 Packages
    Hit http://packages.bosslinux.in anokha/non-free i386 Packages
    Hit http://packages.bosslinux.in anokha/main i386 Packages
    Ign http://packages.bosslinux.in anokha/contrib Translation-en_US
    Ign http://packages.bosslinux.in anokha/contrib Translation-en
    Ign http://packages.bosslinux.in anokha/main Translation-en_US
    Ign http://packages.bosslinux.in anokha/main Translation-en
    Ign http://packages.bosslinux.in anokha/non-free Translation-en_US
    Ign http://packages.bosslinux.in anokha/non-free Translation-en
    Reading package lists... Done
    W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/main i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_main_binary-i386_Packages)
    W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/contrib i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_contrib_binary-i386_Packages)
    W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/non-free i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_non-free_binary-i386_Packages)
    W: You may want to run apt-get update to correct these problems
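    These are warnings rather than fatal errors: the same BOSS repository is listed more than once across /etc/apt/sources.list and the files under /etc/apt/sources.list.d/, so commenting out or deleting the extra copies makes them go away. Here is a small sketch (Python, standard library only; the paths are the usual Debian/Ubuntu locations) that reports which files contain the duplicate entries:

        # Sketch: list duplicate "deb" lines across apt's source files -- the usual
        # cause of "Duplicate sources.list entry" warnings.  Adjust paths if needed.
        from collections import defaultdict
        from pathlib import Path

        files = [Path("/etc/apt/sources.list"), *Path("/etc/apt/sources.list.d").glob("*.list")]
        seen = defaultdict(list)

        for f in files:
            if not f.exists():
                continue
            for lineno, line in enumerate(f.read_text().splitlines(), 1):
                entry = " ".join(line.split())          # normalise whitespace
                if entry.startswith("deb"):
                    seen[entry].append(f"{f}:{lineno}")

        for entry, places in seen.items():
            if len(places) > 1:
                print(f"duplicate: {entry}\n  found in: {', '.join(places)}")

    After removing or commenting out the duplicates, apt-get update should complete without the warnings.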

    Read the article

  • New Java ME security app, Rapid Tracker, is now full version

    - by hinkmond
    Rapid Protect has updated its Java ME security app to be the full version now, instead of a dumbed-down version that ran on feature phones. Now, that's progress! See: Full Rapid Tracker on Java ME. Here's a quote: Rapid Protect, a leading company focused on mobile based safety, security and collaboration space announces major feature enhancements to its award winning "Rapid Tracker" mobile applications. In addition to many new features, it announced availability of Full Rapid Tracker application on J2ME non-smart feature phones. Hmmm... "on J2ME non-smart feature phones". I wonder if by "non-smart" they mean another word... Perhaps, "non-iDrone-Anphoid"? Hinkmond

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >