Search Results


  • Easy Made Easier - Networking

    - by dragonfly
        In my last post, I highlighted the feature of the Appliance Manager Configurator that auto-fills some fields based on previous field values, including host names based on System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.

        The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file, and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, and in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.

        The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. It is during Step 11, when network validation occurs, that these come to light, which is almost halfway through the 24 total steps, and it can be frustrating, whether the cause was a typo, a DNS misconfiguration or an IP address already in use. This is why I recommend, as a best practice, taking advantage of the Appliance Manager Configurator prior to deploying an Oracle Database Appliance.

        Why? Not only do you get the benefit of being able to double-check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check that the host names you entered are in DNS and do resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites).

        After you have completed the validation, as seen in the screenshot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance, or if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.

        I hope you are finding this series of posts useful.
My next post will cover some aspects of the windowing environment that gets run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a direct connected monitor, keyboard and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.
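If you want to rehearse the same kind of checks the Network Validation step performs before you are anywhere near the appliance, a rough sketch is below. The host names and addresses are made-up placeholders, not values from this post, and the exact checks the Configurator runs are not documented here; also note that ICMP reachability from Java depends on OS privileges, so treat a failed isReachable() as "unconfirmed" rather than proof the address is free.

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class PreDeploymentNetworkCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical host/IP pairs, as you would have entered them in the configurator
            String[][] planned = {
                {"oda-node1.example.com", "192.0.2.10"},
                {"oda-node2.example.com", "192.0.2.11"},
            };

            for (String[] entry : planned) {
                String host = entry[0];
                String expectedIp = entry[1];

                String resolved;
                try {
                    // Does the host name resolve in DNS, and to the address you intended?
                    resolved = InetAddress.getByName(host).getHostAddress();
                } catch (UnknownHostException e) {
                    resolved = "(not in DNS)";
                }
                boolean dnsOk = resolved.equals(expectedIp);

                // Is something already answering on that address? (rough stand-in for the ping test)
                boolean alreadyInUse = InetAddress.getByName(expectedIp).isReachable(2000);

                System.out.printf("%-25s resolves to %-15s ok=%-5s already in use=%s%n",
                        host, resolved, dnsOk, alreadyInUse);
            }
        }
    }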


  • How to make Ubuntu recognize an unknown external display (so I can adjust its resolution)?

    - by WagnerAA
    I have a Dell laptop with an external monitor attached (a Samsumg SyncMaster 931c). My laptop display was recognized, and I can adjust its optimum resolution. My external display is still unknown, thus I'm stuck at a lower resolution (1024x768): I tried the "Detect Displays" button, but it didn't work, nothing happens. I recently upgraded from Ubuntu 12.04 to 12.10. Things were working before. I don't know if I can actually change this configuration, or if this is a bug. I searched for an answer here and also in Launchpad's website, but found none. I even tried to install Nvidia drivers, and just messed things up. It seems I wasn't even using nvidia before, as I guessed by looking at my additional drivers configuration: My laptop has an Intel chipset, I guess: $ dpkg --get-selections | grep -i -e nvidia -e intel intel-gpu-tools install libdrm-intel1:amd64 install libdrm-intel1:i386 install nvidia-common install xserver-xorg-video-intel install I don't have an xorg.conf file (I think this is nvidia related, am I right?): $ cat /etc/X11/xorg.conf cat: /etc/X11/xorg.conf: No such file or directory $ ls -l /etc/X11/ total 76 drwxr-xr-x 2 root root 4096 Out 19 23:41 app-defaults drwxr-xr-x 2 root root 4096 Abr 25 2012 cursors -rw-r--r-- 1 root root 18 Abr 25 2012 default-display-manager drwxr-xr-x 4 root root 4096 Abr 25 2012 fonts -rw-r--r-- 1 root root 17394 Dez 3 2009 rgb.txt lrwxrwxrwx 1 root root 13 Mai 1 03:33 X -> /usr/bin/Xorg drwxr-xr-x 3 root root 4096 Out 19 23:41 xinit drwxr-xr-x 2 root root 4096 Jan 23 2012 xkb -rw-r--r-- 1 root root 0 Out 24 08:55 xorg.conf.nvidia-xconfig-original -rwxr-xr-x 1 root root 709 Abr 1 2010 Xreset drwxr-xr-x 2 root root 4096 Out 19 10:08 Xreset.d drwxr-xr-x 2 root root 4096 Out 19 10:08 Xresources -rwxr-xr-x 1 root root 3730 Jan 20 2012 Xsession drwxr-xr-x 2 root root 4096 Out 20 00:11 Xsession.d -rw-r--r-- 1 root root 265 Jul 1 2008 Xsession.options -rw-r--r-- 1 root root 13 Ago 15 06:43 XvMCConfig -rw-r--r-- 1 root root 601 Abr 25 2012 Xwrapper.config Here is some information I gathered by looking at other related posts: $ sudo lshw -C display; lsb_release -a; uname -a *-display:0 description: VGA compatible controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 07 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:48 memory:f6800000-f6bfffff memory:d0000000-dfffffff ioport:1800(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 07 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:f6100000-f61fffff LSB Version: 
core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:cxx-3.0-amd64:cxx-3.0-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:cxx-4.0-amd64:cxx-4.0-noarch:desktop-3.1-amd64:desktop-3.1-noarch:desktop-3.2-amd64:desktop-3.2-noarch:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.0-amd64:graphics-3.0-noarch:graphics-3.1-amd64:graphics-3.1-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-3.2-amd64:printing-3.2-noarch:printing-4.0-amd64:printing-4.0-noarch:qt4-3.1-amd64:qt4-3.1-noarch Distributor ID: Ubuntu Description: Ubuntu 12.10 Release: 12.10 Codename: quantal Linux Batcave 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ xrandr -q Screen 0: minimum 320 x 200, current 2304 x 800, maximum 32767 x 32767 LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 286mm x 1790mm 1280x800 59.9*+ 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 connected 1024x768+1280+32 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0* 800x600 60.3 56.2 848x480 60.0 640x480 59.9 DP1 disconnected (normal left inverted right x axis y axis) If there's anything else I can do, any other information I can post here, to help me configure this external display, please let me know. If this is actually a bug, I apologize (I know bugs are not allowed here), but I really wasn't sure. And I will promptly file a bug report in Launchpad if that's the case. Thanks a lot in advance. ;)


  • User password rejected on login screen but accepted on text console login

    - by MadsirR
    I had to force shutdown my Ubuntu 12.04 64-bit, after which I restarted and tried to log in as a normal user, which was rejected several times. I then logged in as guest and switched via tty to my regular account with my normal password, which succeeded. (So the password is still valid.) How can I gain access again via the normal login procedure (welcome screen)?

    Update: When I tried to log on with my new password, it again was denied. When I deliberately tried to log on with a faulty password, an error message came back, saying: Access denied - wrong password. I suppose the first time the password was not rejected, but the procedure was aborted for some reason. Some additional info after trying to find a solution: I am convinced it is a Compiz issue. Why? Before this happened, all sessions came to a grinding halt, regardless of being logged on in a 2D or 3D environment. I found a link saying that I should remove Compiz and proceed in a 2D environment, which initially worked without a glitch, until my system went into a state of total oblivion. Only after that, the above mentioned troubles appeared. In the meantime I have happened to find a thread with reference 17381, describing exactly what I have experienced. For now, I will try to cure this situation (later this week) and revert with the results, hopefully to close this post. In the meantime I cordially thank you all, even if you didn't kill the problem; you gave me the inspiration to look further and find a possible cure.

    Update 2: After 15 hrs of trial-and-error I called it quits. (When I decided to tackle this problem, I had given myself 12 hrs, to avoid massive loss of time.) I decided to re-install Precise, since the "point 1" version has become available. Log-in is back to normal, as is the graphic environment. Response to mouse input is still appalling, especially when I have a series of screens open as "children" of a "parent" screen. It still completely locks up. I have installed Enlightenment, Gnome Classic, Gnome 3 and Cinnamon, and they all behave in a similar fashion.

    FOR THOSE WHO NEED A WAY OUT IN SITUATIONS LIKE THIS: Open a terminal with [Ctrl+Alt+F2]. Type [sudo killall firefox] (or whatever application you wish to terminate). Key in your password. Return to your graphical screen with [Ctrl+Alt+F7], and Bob's your uncle. Just re-open Firefox like nothing happened. Next time you are stuck: [Ctrl+Alt+F2], up arrow until you meet the command of your desire, [Ctrl+Alt+F7], etcetera. Hope this is of help. My next move will be to upgrade the kernel to 3.4 from the repositories for 12.10. However, since this entails a totally new situation, I will start a new thread on this site to avoid topic pollution. I will keep you posted.


  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process. The log file can be used as a validation for process completion. To automate the monitoring of these log files, perform the following steps:

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail. Searching the log and finding this value is worthy of an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way, this would be a good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could be different based on Reporting Database versions): [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process with failonerror="true", and from there send the results to an external tool to monitor the jobs, send them to email, or send them to a database.

    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA. (A rough sketch of such a log check appears after the query below.)

    Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version, this could be in the Stage or Star databases; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records entries for the ETL run along with a Start and Finish time. If the ETL process has failed, the Finish date should be null. This table can be queried at a time when the ETL process is expected to be finished, and if the Finish date is null, send an alert. These are just some options. There are additional ways this can be accomplished based around these two areas - log files or database.
Here is an additional query to gather more information about your ETL run (connect as Staruser): SELECT SYSDATE,test_script,decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc+1))) PROCESSNAME ,duration duration from ( select (e.endtime - b.starttime) * 1440 duration, to_char(b.starttime, 'hh24:mi:ss') starttime, to_char(e.endtime, 'hh24:mi:ss') endtime,  b.PROCESSNAME, instr(b.PROCESSNAME, ']') loc, b.infotype test_script from ( select processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE from etl_processinfo  where processid = (select max(PROCESSID) from etl_processinfo) and infotype = 'BEGIN' ) b  inner Join ( select processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE from etl_processinfo  where processid = (select max(PROCESSID) from etl_processinfo) and infotype = 'END' ) e on b.processid = e.processid  and b.PROCESSNAME = e.PROCESSNAME order by b.starttime)
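As promised above, here is a minimal sketch of the log-file check from steps 1-3, using the staretlprocess.log name, the Step 39/39 marker and the "Finished Writing Report" message mentioned in this post. The file path and the alerting mechanism are placeholders you would adapt to your own installation.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class EtlLogCheck {
        public static void main(String[] args) throws IOException {
            // Placeholder path: point this at <installation directory>\log\staretlprocess.log
            List<String> lines = Files.readAllLines(Paths.get("C:/primavera/etl/log/staretlprocess.log"));

            boolean hasError = lines.stream().anyMatch(l -> l.contains("[ERROR]"));
            boolean reachedFinalStep = lines.stream().anyMatch(l -> l.contains("Step 39/39"));
            String lastLine = lines.isEmpty() ? "" : lines.get(lines.size() - 1);
            boolean finishedCleanly = lastLine.contains("Finished Writing Report");

            if (hasError || !reachedFinalStep || !finishedCleanly) {
                // Replace with your alerting of choice: email, ticket, monitoring API, etc.
                System.out.println("ALERT: ETL run looks incomplete or failed.");
                System.out.println("  [ERROR] found:        " + hasError);
                System.out.println("  Final step recorded:  " + reachedFinalStep);
                System.out.println("  Clean finish message: " + finishedCleanly);
            } else {
                System.out.println("ETL run completed successfully.");
            }
        }
    }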


  • Animation Trouble with Java Swing Timer - Also, JFrame Will Not Exit_On_Close

    - by forgotton_semicolon
    So, I am using a Java Swing Timer because putting the animation code in a run() method of a Thread subclass caused an insane amount of flickering that is really a terrible experience for any video game player. Can anyone give me any tips on: Why there is no animation... Why the JFrame will not close when it is coded to Exit_On_Close 2 times My code is here: import java.awt.; import java.awt.event.; import javax.swing.*; import java.net.URL; //////////////////////////////////////////////////////////////// TFQ public class TFQ extends JFrame { DrawingsInSpace dis; //========================================================== constructor public TFQ() { dis = new DrawingsInSpace(); JPanel content = new JPanel(); content.setLayout(new FlowLayout()); this.setContentPane(dis); this.setDefaultCloseOperation(EXIT_ON_CLOSE); this.setTitle("Plasma_Orbs_Off_Orion"); this.setSize(500,500); this.pack(); //... Create timer which calls action listener every second.. // Use full package qualification for javax.swing.Timer // to avoid potential conflicts with java.util.Timer. javax.swing.Timer t = new javax.swing.Timer(500, new TimePhaseListener()); t.start(); } /////////////////////////////////////////////// inner class Listener thing class TimePhaseListener implements ActionListener, KeyListener { // counter int total; // loop control boolean Its_a_go = true; //position of our matrix int tf = -400; //sprite directions int Sprite_Direction; final int RIGHT = 1; final int LEFT = 2; //for obstacle Rectangle mega_obstacle = new Rectangle(200, 0, 20, HEIGHT); public void actionPerformed(ActionEvent e) { //... Whenever this is called, repaint the screen dis.repaint(); addKeyListener(this); while (Its_a_go) { try { dis.repaint(); if(Sprite_Direction == RIGHT) { dis.matrix.x += 2; } // end if i think if(Sprite_Direction == LEFT) { dis.matrix.x -= 2; } } catch(Exception ex) { System.out.println(ex); } } // end while i think } // end actionPerformed @Override public void keyPressed(KeyEvent arg0) { // TODO Auto-generated method stub } @Override public void keyReleased(KeyEvent arg0) { // TODO Auto-generated method stub } @Override public void keyTyped(KeyEvent event) { // TODO Auto-generated method stub if (event.getKeyChar()=='f'){ Sprite_Direction = RIGHT; System.out.println("matrix should be animating now "); System.out.println("current matrix position = " + dis.matrix.x); } if (event.getKeyChar()=='d') { Sprite_Direction = LEFT; System.out.println("matrix should be going in reverse"); System.out.println("current matrix position = " + dis.matrix.x); } } } //================================================================= main public static void main(String[] args) { JFrame SafetyPins = new TFQ(); SafetyPins.setVisible(true); SafetyPins.setSize(500,500); SafetyPins.setResizable(true); SafetyPins.setLocationRelativeTo(null); SafetyPins.setDefaultCloseOperation(EXIT_ON_CLOSE); } } class DrawingsInSpace extends JPanel { URL url1_plasma_orbs; URL url2_matrix; Image img1_plasma_orbs; Image img2_matrix; // for the plasma_orbs Rectangle bbb = new Rectangle(0,0, 0, 0); // for the matrix Rectangle matrix = new Rectangle(-400, 60, 430, 200); public DrawingsInSpace() { //load URLs try { url1_plasma_orbs = this.getClass().getResource("plasma_orbs.png"); url2_matrix = this.getClass().getResource("matrix.png"); } catch(Exception e) { System.out.println(e); } // attach the URLs to the images img1_plasma_orbs = Toolkit.getDefaultToolkit().getImage(url1_plasma_orbs); img2_matrix = 
Toolkit.getDefaultToolkit().getImage(url2_matrix); } public void paintComponent(Graphics g) { super.paintComponent(g); // draw the plasma_orbs g.drawImage(img1_plasma_orbs, bbb.x, bbb.y,this); //draw the matrix g.drawImage(img2_matrix, matrix.x, matrix.y, this); } } // end class
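For comparison, here is a minimal sketch of the Swing Timer pattern the code above seems to be aiming for. The while (Its_a_go) loop inside actionPerformed never returns, which keeps the Event Dispatch Thread busy; that alone would explain both the missing animation and the unresponsive close button. (The pasted imports also appear to have lost their asterisks, i.e. java.awt.* and java.awt.event.*.) Class and field names below are illustrative, not taken from the original program.

    import java.awt.Color;
    import java.awt.Graphics;
    import javax.swing.JFrame;
    import javax.swing.JPanel;
    import javax.swing.SwingUtilities;
    import javax.swing.Timer;

    public class TimerAnimationSketch extends JPanel {
        private int x = 0; // sprite position, advanced a little on every timer tick

        public TimerAnimationSketch() {
            // Each tick does one small update and one repaint, then returns immediately,
            // so the Event Dispatch Thread stays free to paint and handle window events.
            new Timer(15, e -> {
                x = (x + 2) % 500;
                repaint();
            }).start();
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.setColor(Color.BLUE);
            g.fillRect(x, 100, 40, 40);
        }

        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Timer animation sketch");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new TimerAnimationSketch());
                frame.setSize(500, 300);
                frame.setVisible(true);
            });
        }
    }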


  • Capgemini Global Business Process Management Report

    - by JuergenKress
    Welcome to the Capgemini Global Business Process Management (BPM) Report. This report is an exploration of key trends in BPM as seen by CXOs across a broad selection of sectors and geographies. BPM is perhaps at a tipping point - it’s certainly at an exciting stage in its evolution. As both an engineer and an Operational Research practitioner in my early career, and subsequently as a consultant, I have seen BPM through its development over the last 26 years. BPM has its roots in management practices such as Total Quality Management, Business Process Reengineering & Model Based Development; but the advent of the new generation of sophisticated modelling and process execution technologies has greatly enhanced BPM’s power to truly transform businesses. This has created one of the most rapidly growing and attractive market sectors for both services and technology.

    We see BPM as a critical management discipline that, when executed against clear, cross-organizational business objectives, can deliver exceptional value to that organization. However, we also see that the potential for BPM is not well understood. Our decision to conduct this global survey stemmed from discussions with our clients. We sought to gain a better impression of their understanding of BPM, how they measure its value, and how far it is prioritized within their Business and Technology Transformation efforts. This research confirms our belief that BPM needs to be a jointly owned Business and IT discipline. It also demonstrates that it is starting to gain significant traction in the market and that investments are starting to pay dividends to the early adopters. At Capgemini we are being asked by our clients to help them simplify and improve their business models and the technology that supports them, and we are already seeing BPM become an integral and key part of this proposition.

    Business Process Management is becoming ever more relevant to both large and small organizations in the current economic climate. At a time when many different market sectors are facing slow revenue growth, customer churn and increased pressures on costs, BPM becomes a critical weapon in the battle for efficiency and effectiveness in processes. Furthermore, in a challenging and changing business environment that is characterized by uncertainty, it allows organizations to adapt, be more agile and fleet of foot.

    Capgemini is seeing strong demand for BPM services in markets such as the USA, the UK, the Netherlands and France; and there are clear signs of increased interest in other geographies such as Germany, Sweden, Spain, Italy and Australia. In sector terms, the financial services industry has led the way in BPM adoption over the recent past, driven by an increased focus on customer-centricity and regulatory compliance. Other sectors (public sector, utilities, telco, retail and manufacturing) are now not only catching up, but are starting to use BPM in new ways to create new business models to serve customers and outsmart the competition. The research findings also show, however, that this is a complex landscape, and we are not seeing adoption of BPM in a clear and consistent way. This report also looks at some of the barriers to adoption, with organizational silos being a major obstacle. Waters are further muddied by fragmented budgets, lack of clear governance and ownership, and internal politics.
The objective of our investment in this research project was to shed some light on these elements, with a view to assisting organizations to create strategies that avoid, or at least mitigate, some of these barriers to success. Management of change in such endeavours is a key part in enabling the appropriate alignment of business and technology to support their transformation efforts. I hope that you find this report of benefit in the further adoption of Business Process Management. Get the full report here.

SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • SQL Saturday and Exploring Data Privacy

    - by Johnm
    I have been highly impressed with the growth of the SQL Saturday phenomenon. It seems that an announcement for a new wonderful event finds its way to my inbox on a daily basis. I have had the opportunity to attend the first of the SQL Saturday's for Tampa, Chicago, Louisville and recently my home town of Indianapolis. It is my hope that there will be many more in my future. This past weekend I had the honor of being selected to speak amid a great line up of speakers at SQL Saturday #82 in Indianapolis. My session topic/title was "Exploring Data Privacy". Below is a brief synopsis of my session: Data Privacy in a Nutshell        - Definition of data privacy        - Examples of personally identifiable data        - Examples of Sensitive data Laws and Stuff        - Various examples of laws, regulations and policies that influence the definition of data privacy        - General rules of thumb that encompasses most laws Your Data Footprint        - Who has personal information about you?        - What are you exchanging data privacy for?        - The amazing resilience of data        - The cost of data loss Weapons of Mass Protection       - Data classification       - Extended properties       - Database Object Schemas       - An extraordinarily brief introduction of encryption       - The amazing data professional  <-the most important point of the entire session! The subject of data privacy is one that is quickly making its way to the forefront of the mind of many data professionals. Somewhere out there someone is storing personally identifiable and other sensitive data about you. In some cases it is kept reasonably secure. In other cases it is kept in total exposure without the consideration of its potential of damage to you. Who has access to it and how is it being used? Are we being unnecessarily required to supply sensitive data in exchange for products and services? These are just a few questions on everyone's mind. As data loss events of grand scale hit the headlines in a more frequent succession, the level of frustration and urgency for a solution increases. I assembled this session with the intent to raise awareness of sensitive data and remind us all that we, data professionals, are the ones who have the greatest impact and influence on how sensitive data is regarded and protected. Mahatma Gandhi once said "Be the change you want to see in the world." This is guidance that I keep near to my heart as I approached this topic of data privacy.


  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I had described the factors that lead to compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance.

    First, let’s look at the business implications of increasing complexity of process compliance for financial institutions:
    · Increased time and cost of compliance, due to duplication of effort in conforming to regulatory requirements as process changes are driven by evolving regulatory mandates, shifting business priorities or internal/external audit requirements
    · Delays in audit reporting, due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process

    Next, let’s consider some approaches to managing the operational risk of business processes. Financial institutions considering reducing the operational risk of their processes, generally speaking, have two choices:
    · Rip-and-replace existing applications with new off-the-shelf applications.
    · Extend the capabilities of existing applications by modeling their data and process interactions, with other applications or user channels, outside of the application boundary using BPM.

    The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, pre-built compliance of any packaged or custom-built application should not be mistaken for a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, where vulnerabilities arise to unknown and accidental faults that potentially result in errors and lead to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries, in other words span multiple applications, constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with “KYC” policies void even when the “KYC” checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process “gaps” for Financial Services processes including “Customer On-boarding”, “Quote-to-Contract”, “Deposit/Loan Origination”, “Trade Exceptions”, “Interest Claim Tracking”, etc.
If you are faced with a similar challenge and need any guidance on the same feel free to drop me a note.


  • E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

    - by Giri Mandalika
    Different batch processes in Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work in hand. The number of child processes to fork is controlled by the THREADS parameter in APPS.PAY_ACTION_PARAMETERS view. THREADS parameter The default value for THREADS parameter is 1, which is fine for a single-processor system but not optimal for the modern multi-core multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. However on the down side, since multiple child processes operate against the same set of payroll tables in HR schema, database may experience undesired consequences such as buffer busy waits and index contention, which results in giving up some of the gains achieved by using multiple child processes/threads to process the work. Couple of other action parameters, CHUNK_SIZE and CHUNK_SHUFFLE, help alleviate the database contention. eg., Set a value for THREADS parameter as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = DESIRED_VALUE WHERE PARAMETER_NAME = 'THREADS'; COMMIT; (I am not aware of any maximum value for THREADS parameter) CHUNK_SIZE parameter The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of desired size represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time. When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted. It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into relevant table(s). When multiple child processes are inserting data at the same time into the same set of tables, as explained earlier, database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by +/-10 for CHUNK_SIZE parameter and observe the performance impact. A larger value may make sense during the main processing phase. Again experimentation is the key in finding the suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found. eg., Set a value for CHUNK_SIZE parameter as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = DESIRED_VALUE WHERE PARAMETER_NAME = 'CHUNK_SIZE'; COMMIT; CHUNK_SIZE action parameter accepts a value that is as low as 1 or as high as 16000. CHUNK SHUFFLE parameter By default, chunks of assignment actions are processed sequentially by all threads - which may not be a good thing especially given that all child processes/threads performing similar actions against the same set of tables almost at the same time. By saying not a good thing, I mean to say that the default behavior leads to contention in the database (in data blocks, for example). 
It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured. eg., Set chunk shuffling as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = 'Y' WHERE PARAMETER_NAME = 'CHUNK SHUFFLE'; COMMIT; Finally I recommend checking the following document out for additional details and additional pay action tunable parameters that may speed up the processing of Oracle Payroll.     My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks Also experiment with different combinations of parameters and values until the right set of action parameters and values are found for your deployment.


  • Minimum team development sizes

    - by MarkPearl
    Disclaimer - these are observations that I have had, I am not sure if this follows the philosophy of scrum, agile or whatever, but most of these insights were gained while implementing a scrum scenario. Two is a partnership, three starts a team For a while I thought that a team was anything more than one and that scrum could be effective methodology with even two people. I have recently adjusted my thinking to a scrum team being a minimum of three, so what happened to two and what do you call it? For me I consider a group of two people working together a partnership - there is value in having a partnership, but some of the dynamics and value that you get from having a team is lost with a partnership. Avoidance of a one on one confrontation The first dynamic I see missing in a partnership is the team motivation to do better and how this is delivered to individuals that are not performing. Take two highly motivated individuals and put them together and you will typically see them continue to perform. Now take a situation where you have two individuals, one performing and one not and the behaviour is totally different compared to a team of three or more individuals. With two people, if one feels the other is not performing it becomes a one on one confrontation. Most people avoid confrontations and so nothing changes. Compare this to a situation where you have three people in a team, 2 performing and 1 not the dynamic is totally different, it is no longer a personal one on one confrontation but a team concern and people seem more willing to encourage the individual not performing and express their dissatisfaction as a team if they do not improve. Avoiding the effects of Tuckman’s Group Development Theory If you are not familiar with Tuckman’s group development theory give it a read (http://en.wikipedia.org/wiki/Tuckman's_stages_of_group_development) In a nutshell with Tuckman’s theory teams go through these stages of Forming, Storming, Norming & Performing. You want your team to reach and remain in the Performing stage for as long as possible - this is where you get the most value. When you have a partnership of two and you change the individuals in the partnership you basically do a hard reset on the partnership and go back to the beginning of Tuckman’s model each time. This has a major effect on the performance of a team and what they can deliver. What I have seen is that you reduce the effects of Tuckman's theory the more individuals you have in the team (until you hit the maximum team size in which other problems kick in). While you will still experience Tuckman's theory with a team of three, the impact will be greatly reduced compared to two where it is guaranteed every time a change occurs. It's not just in the numbers, it's in the people One final comment - while the actual numbers of a team do play a role, the individuals in the team are even more important - ideally you want to keep individuals working together for an extended period. That doesn't mean that you never change the individuals in a team, or that once someone joins a team they are stuck there - there is value in an individual moving from team to team and getting cross pollination, but the period of time that an individual moves should be in month's or years, not days or weeks. Why? So why is it important to know this? Why is it important to know how a team works and what motivates them? 
I have been asking myself this question for a while and where I am at right now is this… the aim is to achieve the stage where the sum of the total (team) is greater than the sum of the parts (team members). This is why we form teams and why understanding how they work is a challenge and also extremely stimulating.


  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I’ve yet to have a totally smooth experience with PowerShell and it was late on Friday when I crashed into this problem. I haven’t investigated if this is a generally well understood circumstance and if it is then I apologise for repeating everything. Scenario: I wanted to scan a number of server for many properties, including existing logins and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShellISE and started typing. The script doesn’t come easily to me but I follow the logic of SMO and the properties and methods available with the language so it seemed something I should be able to master. Version #1 of my script. And the results it returns when executed against my home laptop server. These results looked good and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let’s just review my logic for each step of the code at the top. Lines 1 to 7 just set up our variables and write out the header message Line 8 our first loop, to go through each login on the server Line 10 an inner loop that will assess each role name that each login has been assigned Line 11 a test to see if each role has the name ‘sysadmin’ Line 13 write out the login name with a bright format as it is a sysadmin login Line 17 write out the login name with no formatting It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen pointing at the error I made but to me this made total sense. Until I altered the code, I altered lines 6 and 7 of code above to be: $c = $Svr.Logins.Count write-host “There are $c Logins on the server” This changed my output to look like this: This started alarm bells ringing – there are clearly not 13 logins listed So, let’s see where things are going wrong, edit the script so it looks like this. I’ve highlighted the changes to make Running this code shows me these results Our $n variable should count up by one for each login returned and We are clearly missing some logins. I referenced this list back to Management Studio for my server and see the Logins as below, where there are clearly 13 logins. We see a Login called Annette in SSMS but not in the script results so I opened that up and looked at its properties and it’s server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses out show they too are only members of the public role. Right now I can’t work out whether there is a good reason for this and if it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory for this. How to get the full list of logins. Clearly I needed to get a full list of the logins so set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with and I think it is doing everything that I need it to. It gives me the expected results like this: So it seems that the ListMembers() method is the trouble maker in my first versions of the code. I would have expected that ListMembers should return Logins that are only members of the public role, certainly Technet makes no reference to it being left out in it’s Login.ListMembers details. Suffice to say, it’s a lesson learned and I will approach using it with caution in future circumstances.


  • SQL to select random mix of rows fairly [migrated]

    - by Matt Sieker
    Here's my problem: I have a set of tables in a database populated with data from a client that contains product information. In addition to the basic product information, there is also information about the manufacturer, and categories for those products (a product can be in one or more categories). These categories are then referred to as "Product Categories", and which stores these products are available at. These tables are updated once a week from a feed from the customer. Since for our purposes, some of the product categories are the same, or closely related for our purposes, there is another level of categories called "General Categories", a general category can have one or more product categories. For the scope of these tables, here's some rough numbers: Data Tables: Products: 475,000 Manufacturers: 1300 Stores: 150 General Categories: 245 Product Categories: 500 Mapping Tables: Product Category -> Product: 655,000 Stores -> Products: 50,000,000 Now, for the actual problem: As part of our software, we need to select n random products, given a store and a general category. However, we also need to ensure a good mix of manufacturers, as in some categories, a single manufacturer dominates the results, and selecting rows at random causes the results to strongly favor that manufacturer. The solution that is currently in place, works for most cases, involves selecting all of the rows that match the store and category criteria, partition them on manufacturer, and include their row number from within their partition, then select from that where the row number for that manufacturer is less than n, and use ROWCOUNT to clamp the total rows returned to n. This query looks something like this: SET ROWCOUNT 6 select p.Id, GeneralCategory_Id, Product_Id, ISNULL(m.DisplayName, m.Name) AS Vendor, MSRP, MemberPrice, FamilyImageName from (select p.Id, gc.Id GeneralCategory_Id, p.Id Product_Id, ctp.Store_id, Manufacturer_id, ROW_NUMBER() OVER (PARTITION BY Manufacturer_id ORDER BY NEWID()) AS 'VendorOrder', MSRP, MemberPrice, FamilyImageName from GeneralCategory gc inner join GeneralCategoriesToProductCategories gctpc ON gc.Id=gctpc.GeneralCategory_Id inner join ProductCategoryToProduct pctp on gctpc.ProductCategory_Id = pctp.ProductCategory_Id inner join Product p on p.Id = pctp.Product_Id inner join StoreToProduct ctp on p.Id = ctp.Product_id where gc.Id = @GeneralCategory and ctp.Store_id=@StoreId and p.Active=1 and p.MemberPrice >0) p inner join Manufacturer m on m.Id = p.Manufacturer_id where VendorOrder <=6 order by NEWID() SET ROWCOUNT 0 (I've tried to somewhat format it to make it cleaner, but I don't think it really helps) Running this query with an execution plan shows that for the majority of these tables, it's doing a Clustered Index Seek. There are two operations that take up roughly 90% of the time: Index Seek (Nonclustered) on StoreToProduct: 17%. This table just contains the key of the store, and the key of the product. It seems that NHibernate decided not to make a composite key when making this table, but I'm not concerned about this at this point, as compared to the other seek... Clustered Index Seek on Product: 69%. I really have no clue how I could make this one more performant. On categories without a lot of products, performance is acceptable (<50ms), however larger categories can take a few hundred ms, with the largest category taking 3s (which has about 170k products). 
It seems I have two ways to go from this point: Somehow optimize the existing query and table indices to lower the query time. As almost every expensive operation is already a clustered index scan, I don't know what could be done there. The inner query could be tuned to not return all of the possible rows for that category, but I am unsure how to do this, and maintain the requirements (random products, with a good mix of manufacturers) Denormalize this data for the purpose of this query when doing the once a week import. However, I am unsure how to do this and maintain the requirements. Does anyone have any input on either of these items?


  • Oracle at ARM TechCon

    - by Tori Wieldt
    ARM TechCon is a technical conference for hardware and software engineers, Oct. 30-Nov 1 in Santa Clara, California. Days two and three of the conference will be geared towards systems designers and software developers, those interested in building ARM processor-based modules, boards, and systems. It will cover all of the hardware and software, tools, ranging from low-power design, networking and connectivity, open source software, and security. Oracle is a sponsor of ARM TechCon, and will present three Java sessions and a hands-on-lab:  "Do You Like Coffee with Your Dessert? Java and the Raspberry Pi" - The Raspberry Pi, an ARM-powered single board computer running a full Linux distro off an SD card has caused a huge wave of interest among developers. This session looks at how Java can be used on a device such as this. Using Java SE for embedded devices and a port of JavaFX, the presentation includes a variety of demonstrations of what the Raspberry Pi is capable of. The Raspberry Pi also provides GPIO line access, and the session covers how this can be used from Java applications. Prepare to be amazed at what this tiny board can do. (Angela Caicedo, Java Evangelist) "Modernizing the Explosion of Advanced Microcontrollers with Embedded Java" - This session explains why Oracle Java ME Embedded is the right choice for building small, connected, and intelligent embedded solutions, such as industrial control applications, smart sensing, wireless connectivity, e-health, or general machine-to-machine (M2M) functionality---extending your business to new areas, driving efficiency, and reducing cost. The new Oracle Java ME Embedded product brings the benefits of Java technology to microcontroller platforms. It is a full-featured, complete, compliant software runtime with value-add features targeted to the embedded space and has the ability to interface with additional hardware components, remote manageability, and over-the-air software updates. It is accompanied by a feature-rich set of tools free of charge. (Fareed Suliman, Java Product Manager) "Embedded Java in Smart Energy and Healthcare" - This session covers embedded Java products and technologies that enable smart and connect devices in the Smart Energy and Healthcare/Medical industries. (speaker Kevin Lee) "Java SE Embedded Development on ARM Made Easy" - This Hands-on Lab aims to show that developers already familiar with the Java develop/debug/deploy lifecycle can apply those same skills to develop Java applications, using Java SE Embedded, on embedded devices. (speaker Jim Connors) In the Oracle booth #603, you can see the following demos: Industry Solutions with JavaThis exhibit consists of a number of industry solutions and how they can be powered by Java technology deployed on embedded systems.  Examples in consumer devices, home gateways, mobile health, smart energy, industrial control, and tablets all powered by applications running on the Java platform are shown.  Some of the solutions demonstrate the ability of Java to connect intelligent devices at the edge of the network to the datacenter or the cloud as a total end-to-end platform.Java in M2M with QualcommThis station will exhibit a new M2M solutions platform co-developed by Oracle and Qualcomm that enables wireless communications for embedded smart devices powered by Java, and share the types of industry solutions that are possible.  
In addition, a new platform for wearable devices based on the ARM Cortex M3 platform is exhibited.Why Java for Embedded?Demonstration platforms will show how traditional development environments, tools, and Java programming skills can be used to create applications for embedded devices.  The advantages that Java provides because of  the runtime's abstraction of software from hardware, modularity and scalability, security, and application portability and manageability are shared with attendees. Drop by and see why Java is an optimal applications platform for embedded systems.


  • Bios Memory settings and Virtualization + Ubuntu (Unofficial Answers Welcome) [closed]

    - by TardisGuy
    Attempting to optimize my (Main Windowless) Ubuntu system for my uses I will detail questions below, I understand this might be the wrong place to ask these questions. If so, my apologies and I thank you so much for your patience. Thanks to all the volenteers that have helped me learn ubuntu over the years (Since 5.10) This is a "short" list of questions I have been trying to figure out for some time. If you feel you can answer one but not another, that's already more than I could ask for. I have wrote this up in a format for easy navigation to important points Hopefully to less annoy your eyes. You're welcome :) or i'm sorry i annoy you. :( If you would be so kind, Please format answers as follows: question 1: _ _ _ _ _ or question 1-a: _ _ _ _ _ If you want to simply link me to relevant information, rather than type up something really detailed; that would be more than awesome! Memory Specific Questions Goal: Maximizing memory bandwith to better perform in Virtualization, and Large file compression. (Possible conflict?) Ganged vs Unganged "which is better?"** is relative, i know. But what about ganged vs unganged - With or without Bank/channel interleaving? a: Speculation - If i understand correctly, "channel interleaving has something to do with using both channels to read or write in a kind of "striping" pattern, as opposed to a standard half duplex operation.(probably wrong) but wouldn't ganged channels make this irrelevant? Memory Interleaving(bank). Does it have a down side? Does it require a ratio of clocks? (If I run 4x4gig ddr3) a. If im reading correctly(trying to learn), this is designed to spread operations between latency cycles to work around the higher latency of "normal" operation. b. However it seems to me that it has to be: divisible by fractions of a master clock? So if i run memory at 1333mhz, then the mean between 2 (physical) banks would operate every (roughly) 600Mhz? Warning! Possibly utter nonsense: (1333/2 interleaving to act like 1 memory module per 2 sticks of a total of 4 sticks, meaning 2x channels@4) c. which makes me wonder if there would be left over clock cycles the system would have to... "truncate/balance" or something? But I'm certain theres a feature somewhere i don't understand. Virtualization Questions AMD-V - Option of IOMMU Turned it on, why do i have extra option of "64MB"? If IOMMU is on, but "64MB" is "disabled", Is it on? (have scoured google, I still dont know) a. I think i understand that its supposed to (kind of) "set aside" a part of ram to act as a faster interactive zone for "stuff" (usb, Graphics, and... what?) b. I am using Nvidia graphics on AMD (Used kernel option "iommu=pt iommu=1, pt "passthrough"? No idea what they do, found it on google to solve boot up issue) c. Will this option help me use low latency sound hardware, like my midi keyboard? Can you recommend any additional tweaks? a. sysctl settings? b. swap settings? Grats, youve reached the end. Thanks for Reading.


  • concurrency::extent<N> from amp.h

    - by Daniel Moth
    Overview We saw in a previous post how index<N> represents a point in N-dimensional space and in this post we'll see how to define the N-dimensional space itself. With C++ AMP, an N-dimensional space can be specified with the template class extent<N> where you define the size of each dimension. From a look and feel perspective, you'd expect the programmatic interface of a point type and size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least- significant, BUT each integer represents the size for that dimension (and hence cannot be negative). So, if you read the description of index, you won't be surprised with the below description of extent<N> There is the rank field returning the value of N you passed as the template parameter. You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1- 2- and 3- dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0. You can access the components of the extent through the subscript operator (passing it an integer). You can perform some arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -. There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post- decrement), ++ (pre- and post- increment), %=, *=, /=, +=, –= and, finally, there are additional overloads for plus and minus (+,-) between extent<N> and index<N> objects, returning a new extent object as the result. In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements. Example code extent<2> e(3, 4); _ASSERT(e.rank == 2); _ASSERT(e.size() == 3 * 4); e += 3; e[1] += 6; e = e + index<2>(3,-4); _ASSERT(e == extent<2>(9, 9)); _ASSERT( e.contains(index<2>(8, 8))); _ASSERT(!e.contains(index<2>(8, 9))); grid<N> Our upcoming pre-release bits also have a similar type to extent, grid<N>. The way you create a grid is by passing it an extent, e.g. extent<3> e(4,2,6); grid<3> g(e); I am not going to dive deeper into grid, suffice for now to think of grid<N> simply as an alias for the extent<N> object, that you create when you encounter a function that expects a grid object instead of an extent object. Usage The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and use an index<N> object to index into them). We'll also see that it is a grid<N> object that you pass to the new parallel_for_each function that I'll cover in the next post. Comments about this post by Daniel Moth welcome at the original blog.


  • Not enough free disk space

    - by carmatt95
    I'm new to Ubuntu and I'm getting an error in software updater. When I try and do my daily updates, it says: The upgrade needs a total of 25.3 M free space on disk /boot. Please free at least an additional 25.3 M of disk space on /boot. Empty your trash and remove temporary packages of former installations using sudo apt-get clean. I tried typing in sudo apt-get clean into the terminal but I still get the message. All of the pages I read seem to be for experianced Ubuntuers. Any help would be appreciated. I'm running Ubuntu 12.10. I want to upgrade to 13.04 but understand I have to finish these first. EDIT: @Alaa, This is the output from typing in cat /etc/fstab into the terminal: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> /dev/mapper/ubuntu-root / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda1 during installation UUID=fa55c082-112d-4b10-bcf3-e7ffec6cebbc /boot ext2 defaults 0 2 /dev/mapper/ubuntu-swap_1 none swap sw 0 0 /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0 matty@matty-G41M-ES2L:~$ df -h: Filesystem Size Used Avail Use% Mounted on /dev/mapper/ubuntu-root 915G 27G 842G 4% / udev 984M 4.0K 984M 1% /dev tmpfs 397M 1.1M 396M 1% /run none 5.0M 0 5.0M 0% /run/lock none 992M 1.8M 990M 1% /run/shm none 100M 52K 100M 1% /run/user /dev/sda1 228M 222M 0 100% /boot matty@matty-G41M-ES2L:~$ dpkg -l | grep linux-image: ii linux-image-3.5.0-17-generic 3.5.0-17.28 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-18-generic 3.5.0-18.29 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-19-generic 3.5.0-19.30 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-21-generic 3.5.0-21.32 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-22-generic 3.5.0-22.34 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-23-generic 3.5.0-23.35 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-24-generic 3.5.0-24.37 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-25-generic 3.5.0-25.39 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP ii linux-image-3.5.0-26-generic 3.5.0-26.42 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP iF linux-image-3.5.0-28-generic 3.5.0-28.48 i386 Linux kernel image for version 3.5.0 on 32 bit x86 SMP

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance did I come across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where I first heard of Brainbench and their 2013 Bench Games event. But having access to 600+ free exams in a friendly international intellectual competition doesn't happen every day. So, it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., and it can be a little messy how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards.
    Self-assessments
    Actually, I took the tests based on two major aspects: 'Fun Factor' and 'How good would I be in general'... Usually you have to pay for this kind of exam, so being given this unique chance by Brainbench simply to practise them was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, i.e. Microsoft exams, etc.: go through a set of multiple-choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in at most 2 hours - that sounds feasible, doesn't it?
    The Measure of Achievement
    While the 2013 Bench Games are considered a worldwide friendly competition of knowledge, I was really eager to get other Mauritians involved. Using various social media networks and community activities, it all looked quite good at the beginning. Mauritius was listed at rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downward trend seemed to be unstoppable, and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I passed exams that were not even listed, and second, I had better results on some of the exams that were listed. After some further information from the organiser it turned out that my test transcript wasn't available to the public; only then are results considered and counted in the competition. During that time, I actually managed to hold 3 test results on the Individual Leader Board... Other participants were merciless, eh, more successful than me, and produced better test results than I did. But still I managed to stay on the final score board: an 'exotic' combination of exam, test result, country and person, representing Mauritius and the Visual FoxPro community in that fun event. And although I have mainly been developing in Visual FoxPro 9.0 SP2 and C# (using .NET Framework 2.0 through 4.5) for a number of years now, I still managed to pass at Master Level. Hm, actually my Microsoft Certified Professional (MCP) exams date back to June 2004 - more than 9 years ago...
    Look who got lucky...
    As described above, I did a couple of exams as time allowed and without any preparation, but still I received the following mail notification:
    "Thank you for recently participating in our Bench Games event. We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription. Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench
    Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my 2013 Bench Games Master Level pass. It would be interesting to know the total figures, especially how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to being able to participate in other challenges like this in the future.

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there's still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage. So what does a premier, "worth their weight in gold" Community Manager look like?
    More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They're fun to be around. They aren't salespeople. Give me one Community Manager who's been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they'll want to return and keep the relationship going. They're informers and entertainers, with a true belief in the value of the brand's proposition.
    Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don't understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they're building, even if it can't always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don't seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe.
    In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most...human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization. Meet our 3 finalists for Community Manager of the Year.
    Jeff Esposito with Vistaprint
    Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies.
    Stacey Acevero with Vocus
    In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She's been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012.
    Carly Severn with the San Francisco Ballet
    Carly drives engagement, widens the fanbase and generates digital content for America's oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.
We invite you to join us at the first annual Oracle Social Media Summit November 14 and 15 at the Wynn in Las Vegas where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • Dynamically load and call delegates based on source data

    - by makerofthings7
    Assume I have a stream of records that need to have some computation run on them. Records will have a combination of these functions run: Sum, Aggregate, Sum over the last 90 seconds, or ignore. A data record looks like this:
        Date;Data;ID
    Question
    Assuming that ID is an int of some kind, and that int corresponds to a matrix of delegates to run, how should I use C# to dynamically build that launch map? I'm sure this idea exists... it is used in Windows Forms, which has many delegates/events, most of which will never actually be invoked in a real application. The sample below includes a few delegates I want to run (sum, count, and print), but I don't know how to make the right delegates fire based on the source data (say, print the evens and sum the odds in this sample).
        using System;
        using System.Threading;
        using System.Collections.Generic;

        internal static class TestThreadpool
        {
            delegate int TestDelegate(int parameter);

            private static void Main()
            {
                try
                {
                    // this approach works if void is returned.
                    //ThreadPool.QueueUserWorkItem(new WaitCallback(PrintOut), "Hello");
                    int c = 0;
                    int w = 0;
                    ThreadPool.GetMaxThreads(out w, out c);
                    bool rrr = ThreadPool.SetMinThreads(w, c);
                    Console.WriteLine(rrr);

                    // perhaps the above needs time to set up
                    Thread.Sleep(1000);

                    DateTime ttt = DateTime.UtcNow;
                    TestDelegate d = new TestDelegate(PrintOut);
                    List<IAsyncResult> arDict = new List<IAsyncResult>();
                    int count = 1000000;
                    for (int i = 0; i < count; i++)
                    {
                        IAsyncResult ar = d.BeginInvoke(i, new AsyncCallback(Callback), d);
                        arDict.Add(ar);
                    }
                    for (int i = 0; i < count; i++)
                    {
                        int result = d.EndInvoke(arDict[i]);
                    }

                    // Give the callback time to execute - otherwise the app
                    // may terminate before it is called
                    //Thread.Sleep(1000);

                    var res = DateTime.UtcNow - ttt;
                    Console.WriteLine("Main program done----- Total time --> " + res.TotalMilliseconds);
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                }
                Console.ReadKey(true);
            }

            static int PrintOut(int parameter)
            {
                // Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " Delegate PRINTOUT waited and printed this:" + parameter);
                var tmp = parameter * parameter;
                return tmp;
            }

            static int Sum(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static int Count(int parameter)
            {
                Thread.Sleep(5000);
                // Pretend to do some math... maybe save a summary to disk on a separate thread
                return parameter;
            }

            static void Callback(IAsyncResult ar)
            {
                TestDelegate d = (TestDelegate)ar.AsyncState;
                //Console.WriteLine("Callback is delayed and returned"); //d.EndInvoke(ar));
            }
        }
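    One way to build the launch map the question asks about is simply a dictionary from a key derived from the record's ID to the set of delegates that should run for it. The sketch below is only illustrative - the Record shape, the handler bodies and the even/odd keying are assumptions, not part of the original post - but it shows the dispatch pattern; each handler call could just as easily be queued to the thread pool as in the sample above.
        using System;
        using System.Collections.Generic;

        // Hypothetical shape of a "Date;Data;ID" record - not from the original post.
        class Record
        {
            public DateTime Date;
            public double Data;
            public int Id;
        }

        static class Dispatcher
        {
            static double runningTotal;

            static void Print(Record r) { Console.WriteLine(r.Date + ": " + r.Data); }
            static void Sum(Record r)   { runningTotal += r.Data; }

            // The "launch map": key -> the delegates to run for records with that key.
            static readonly Dictionary<int, Action<Record>[]> launchMap =
                new Dictionary<int, Action<Record>[]>
                {
                    { 0, new Action<Record>[] { Print } },   // illustrative: even IDs get printed
                    { 1, new Action<Record>[] { Sum } }      // illustrative: odd IDs get summed
                };

            public static void Dispatch(Record r)
            {
                int key = r.Id % 2;                          // however the real system derives the key
                Action<Record>[] handlers;
                if (launchMap.TryGetValue(key, out handlers))
                {
                    foreach (Action<Record> handler in handlers)
                        handler(r);                          // or queue each call to the thread pool
                }
                // No entry in the map means the record is ignored.
            }
        }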

    Read the article

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a plethora of different medical imaging solutions to optimize hospital workflow. HERMES' advanced algorithms make it possible to detect the smallest changes under therapy, which is important and necessary to optimize different therapeutic methods and doses.
    Challenges
    Fighting illness & disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. Selecting and implementing a new database platform that would deliver the performance, reliability, security and flexibility required by the high-end medical solutions offered by HERMES.
    Solution
    Decision to migrate from an in-house database to an embedded SQL database powering the HERMES products, delivered either as software, as integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. Evaluation of several databases and selection of MySQL based on its high performance, ease of use and integration, and low Total Cost of Ownership. On average, between 4 and 12 terabytes of data are stored in the MySQL databases underpinning the HERMES solutions; the data generated by each medical study is indeed stored for 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the "zero administration" capabilities and reliability of MySQL, enabling them to perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions could also leverage the Oracle Financing payment plan to spread its investment over time and make the MySQL choice even more valuable. "MySQL has proven to be an excellent database choice for us.
We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require.” Jan Bertling, CEO.

    Read the article

  • Styling specific columns and rows

    - by hattenn
    I'm trying to style some specific parts of a 5x4 table that I create. It should be like this: every even-numbered row and every odd-numbered row should get a different color, and the text in the second, third, and fourth columns should be centered. I have this table:
        <table>
          <caption>Some caption</caption>
          <colgroup>
            <col>
            <col class="value">
            <col class="value">
            <col class="value">
          </colgroup>
          <thead>
            <tr>
              <th id="year">Year</th>
              <th>1999</th>
              <th>2000</th>
              <th>2001</th>
            </tr>
          </thead>
          <tbody>
            <tr class="oddLine">
              <td>Berlin</td>
              <td>3,3</td>
              <td>1,9</td>
              <td>2,3</td>
            </tr>
            <tr class="evenLine">
              <td>Hamburg</td>
              <td>1,5</td>
              <td>1,3</td>
              <td>2,0</td>
            </tr>
            <tr class="oddLine">
              <td>München</td>
              <td>0,6</td>
              <td>1,1</td>
              <td>1,0</td>
            </tr>
            <tr class="evenLine">
              <td>Frankfurt</td>
              <td>1,3</td>
              <td>1,6</td>
              <td>1,9</td>
            </tr>
          </tbody>
          <tfoot>
            <tr class="oddLine">
              <td>Total</td>
              <td>6,7</td>
              <td>5,9</td>
              <td>7,2</td>
            </tr>
          </tfoot>
        </table>
    And I have this CSS file:
        table, th, td {
          border: 1px solid black;
          border-collapse: collapse;
          padding: 0px 5px;
        }
        #year {
          text-align: left;
        }
        .oddLine {
          background-color: #DDDDDD;
        }
        .evenLine {
          background-color: #BBBBBB;
        }
        .value {
          text-align: center;
        }
    And this doesn't work: the text in the columns is not centered. What is the problem here? And is there a way to solve it (other than changing the class of all the cells that I want centered)?
    P.S.: I think there's some interference with the .evenLine and .oddLine classes, because when I put "background: black" in the "value" class, it changes the background color of the columns in the first row only. The thing is, if I delete those two classes, text-align still doesn't work, but the background attribute works perfectly. Argh...
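    For what it's worth, the behaviour described is not interference from the row classes: in CSS 2.1 only a handful of properties (border, background, width, visibility) apply to col and colgroup elements, which is why the background set on .value shows through (wherever the row itself has no background, such as the header row) while text-align on the <col> is simply ignored. A hedged sketch of the usual workaround, assuming a browser with :nth-child support; otherwise put a class on the cells themselves:
        /* Center every cell from the second column onwards; <col> cannot carry text-align. */
        td:nth-child(n+2),
        th:nth-child(n+2) {
          text-align: center;
        }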

    Read the article

  • Oracle Applications Cloud Release 8 Customization: Your User Interface, Your Text

    - by ultan o'broin
    Introducing the User Interface Text Editor
    In Oracle Applications Cloud Release 8, there's an addition to the customization tool set, called the User Interface Text Editor (UITE). When signed in with an application administrator role, users launch this new editing feature from the Navigator's Tools > Customization > User Interface Text menu option. See how the editor is in there with other customization tools?
    [Image: User Interface Text Editor is launched from the Navigator Customization menu]
    Applications customers need a way to make changes to the text that appears in the UI without having to initiate an IT project. Business users can now easily change labels on fields, for example. Using a composer and an activated sandbox, these users can take advantage of the Oracle Metadata Services (MDS), add a key to a text resource bundle, and then type in their preferred label and its description (as a best practice for further work, I'd recommend always completing that description).
    [Image: Changing a simplified UI field label using Oracle Composer]
    In Release 8, the UITE enables business users to easily change UI text on a much wider basis. As with composers, the UITE requires an activated sandbox where users can make their changes safely, before committing them for others to see. The UITE is used for editing UI text that comes from Oracle ADF resource bundles or from the Message Dictionary (or FND_MESSAGE_% tables, if you're old enough to remember such things). Functionally, the Message Dictionary is used for the text that appears in business rule-type error, warning or information messages, or as a text source when ADF resource bundles cannot be used. In the UITE, these Message Dictionary texts are referred to as Multi-part Validation Messages. If the text comes from ADF resource bundles, then it's categorized as User Interface Text in the UITE. This category refers to the text that appears in embedded help in the UI or in simple error, warning, confirmation, or information messages. The embedded help types used in the application are explained in an Oracle Fusion Applications User Experience (UX) design pattern set. The message types have a UX design pattern set too.
    Using UITE
    The UITE enables users to search and replace text in UI strings using case-sensitive options, as well as by type. Users select singular and plural options for text changes, should they apply.
    [Image: Searching and replacing text in the UITE]
    The UITE also provides users with a way to preview and manage changes on an exclusion basis, before committing to the final result. There might, for example, be situations where a phrase or word needs to remain different from how it's generally used in the application, depending on the context.
    [Image: Previewing replacement text changes. Changes can be excluded where required.]
    Multi-Part Messages
    The Message Dictionary table architecture has been inherited from Oracle E-Business Suite days. However, there are important differences in the Oracle Applications Cloud version, notably the additional message text components, as explained in the UX Design Patterns. Message Dictionary text has a broad range of uses as indicated, and it can also be reserved for internal application use, for use by PL/SQL and C programs, and so on. Message Dictionary text may even be concatenated together at run time, where required. The UITE handles the flexibility of such text architecture by enabling users to drill down on each message and see how it's constructed in total. That way, users can ensure that any text changes being made are consistent throughout the different message parts.
    [Image: Multi-part (Message Dictionary) message components in the UITE]
    Message Dictionary messages may also use supportability-related numbers, the ones that appear appended to the message text in the application's UI. However, should you have the requirement to remove these numbers from users' view, the UITE is not the tool for the job. Instead, see my blog about using the Manage Messages UI.

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:
        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32
    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflg & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().
    Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time. To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on the locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.
    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:
        ISM
        system   ncpu   before   after   speedup
        ------   ----   ------   -----   -------
        x4600      32     1386     245       6X
        X7560      64     1016     153       7X
        M9000     512     1196     206       6X
        T5240     128     2506     234      11X
        T4-2      128     1197     107      11X

        DISM
        system   ncpu   before   after   speedup
        ------   ----   ------   -----   -------
        x4600      32     1582     265       6X
        X7560      64     1116     158       7X
        M9000     512     1165     152       8X
        T5240     128     2796     198      14X
    (I am missing the data for T4 DISM, for no good reason; it works fine.) The following table separates the creation and destruction times:
        ISM, T4-2   before   after
        ---------   ------   -----
        create         702      64
        destroy        495      43
    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.
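    For readers who have not run into these flags before, here is a minimal sketch of the two attach styles whose page-locking work vmtasks parallelizes. It is illustrative only: the segment sizes and error handling are placeholders, and locking memory requires the appropriate privilege.
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>
        #include <sys/mman.h>
        #include <stdio.h>

        int main(void)
        {
            size_t len = 64UL * 1024 * 1024;                      /* 64 MB, illustrative */

            /* ISM: the segment is locked in physical memory on the first attach. */
            int ism_id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
            void *ism  = shmat(ism_id, NULL, SHM_SHARE_MMU);
            if (ism == (void *)-1) perror("shmat(SHM_SHARE_MMU)");

            /* DISM: pageable on attach; the application locks it explicitly. */
            int dism_id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
            void *dism  = shmat(dism_id, NULL, SHM_PAGEABLE);
            if (dism != (void *)-1 && mlock(dism, len) != 0)
                perror("mlock");                                  /* memcntl(..., MC_LOCK, ...) also works */

            return 0;
        }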

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time - W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and will provide a row for every day in the given time range. Each row will contain aspects of that day such as calendar year, month, week, quarter, etc., allowing it to be used as the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is also based on the same set date range.
    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) will be related to the date range defined, which is a start date plus a rolling interval - generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6, and this same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 will be the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run, the Reporting Database will pick up this change and also adjust the max date on W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage the dates in these tables will progress forward with the rolling interval.
    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process will pull spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days, it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data after that 3-year date range will be bucketed into the last day of the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally this amount of granularity years in the future is not needed. Remember, all those values 5, 10, 15, or 20 years in the future are still available to report on; they would just be in more of a summary format on the activity or project. The data is always there; the level of granularity is the decision.
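    If you want to confirm the range your own STAR schema ended up with, the boundaries can be read straight from the two tables. A hedged example - the table and column names are the ones mentioned above, but your schema owner, prefix, and exact casing may differ:
        -- Date boundaries of the internally populated day dimension
        SELECT MIN(day_dt) AS min_day, MAX(day_dt) AS max_day
        FROM   w_day_d;

        -- Date boundaries of the calendar table used for spread calculations
        SELECT MIN(daydate) AS min_day, MAX(daydate) AS max_day
        FROM   w_calendar_fs;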

    Read the article

  • How to handle multi-processing of libraries which already spawn sub-processes?

    - by exhuma
    I am having some trouble coming up with a good solution to limit sub-processes in a script which uses a multi-processed library, where the script itself is also multi-processed. Both the library and the script are modifiable by us. I believe the question is more about design than actual code, but for what it's worth, it's written in Python.
    The goal of the library is to hide implementation details of various internet routers. For that reason, the library has a "Proxy" factory method which takes the IP of a router as a parameter. The factory then probes the device using a set of possible proxies. Usually, there is one proxy which immediately knows that it is able to send commands to this device; all others usually take some time to return (given a timeout). One thought was to simply query the device for an identifier and then select the proper proxy using that, but in order to do so, you would already need to know how to query the device. Abstracting this knowledge is one of the main purposes of the library, so that becomes a little bit of a "circular-requirement"/deadlock: to connect to a device, you need to know what proxy to use, and to know what proxy to create, you need to connect to the device. So probing the device is - as we can see - the best solution so far, apart from keeping a lookup table somewhere.
    The library currently kills all remaining processes once a valid proxy has been found. And yes, there is always only one good proxy per device. Currently there are about 12 proxies, so if one creates a proxy instance using the factory, 12 sub-processes are spawned. So far, this has been really useful and worked very well. But recently someone else wanted to use this library to "broadcast" a command to all devices. So he took the library and wrote his own multi-processed script. This obviously spawned 12 * n processes, where n is the number of IPs to which he broadcasted. This has given us two problems:
    1. The host on which the command was executed slowed down to a near halt.
    2. Aborting the script with CTRL+C ground the system to a total halt. Not even the hardware console responded anymore! This may be due to some Python strangeness which still needs to be investigated. Maybe related to http://bugs.python.org/issue8296
    The big underlying question is how to design a library which does multi-processing, so that other applications which use this library and want to be multi-processed themselves do not run into system limitations. My first thought was to require a pool to be passed to the library, and to execute all tasks in that pool. That way, the person using the library has control over the usage of system resources. But my gut tells me that there must be a better solution.
    Disclaimer: my experience with multiprocessing is fairly limited. I have implemented a few straightforward cases which did not require access control to resources, so I have not yet had any practical experience with semaphores or mutexes.
    P.S.: In the future, we may have enough information to do this without the probing. But the database which would contain the proper information is not yet operational. Also, the design of multiprocessing a multiprocessed library intrigues me :)
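    The question already hints at the usual answer - make the shared limit explicit and let the caller own it. Below is a minimal sketch of that idea using a semaphore handed into the factory; the proxy classes, the responds() call, and the cap of 8 are stand-ins and assumptions, not the real library's API:
        import multiprocessing as mp

        def _probe(proxy_cls, ip, limiter, results):
            """Probe one candidate proxy, but only while holding a slot of the shared limiter."""
            with limiter:                          # blocks until a probe slot is free
                proxy = proxy_cls(ip)
                if proxy.responds():               # stand-in for the library's real probe call
                    results.put(proxy_cls)

        def find_proxy(ip, proxy_classes, limiter):
            """Factory sketch: probe all candidates for one device, bounded by 'limiter'."""
            results = mp.Queue()
            workers = [mp.Process(target=_probe, args=(cls, ip, limiter, results))
                       for cls in proxy_classes]
            for w in workers:
                w.start()
            winner = results.get()                 # real code would use a timeout here
            for w in workers:                      # the library already kills the losers
                w.terminate()
                w.join()
            return winner

        if __name__ == "__main__":
            # The broadcasting script creates ONE limiter and shares it across all devices,
            # so the total number of concurrent probe processes stays bounded system-wide.
            limiter = mp.BoundedSemaphore(8)       # illustrative cap
            # for ip in all_device_ips:
            #     find_proxy(ip, PROXY_CLASSES, limiter)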

    Read the article
