Search Results

Search found 3024 results on 121 pages for 'growth rate'.


  • 7 Steps To Cut Recruiting Costs & Drive Exceptional Business Results

    - by Oracle Accelerate for Midsize Companies
    By Steve Viarengo, Vice President Product Management, Oracle Taleo Cloud Services
    In good times, trimming operational costs is an ongoing goal. In tough times, it’s a necessity. In both good times and bad, however, recruiting occurs. Growth increases headcount in good times, and opportunistic or replacement hiring occurs in slow business cycles. By employing creative recruiting strategies in tandem with the latest technology developments, you can reduce recruiting costs while driving exceptional business results. Here are some critical areas to focus on.
    1. Target Direct Cost Savings. Total recruiting process expenses are the sum of external costs plus internal labor costs. Most organizations can reduce recruiting expenses with direct cost savings. While additional savings on indirect costs can be realized from process improvement and efficiency gains, there are direct cost savings and benefits readily available in three broad areas: sourcing, assessments, and green recruiting.
    2. Sourcing: Reduce Agency Costs. Agency search firm fees can amount to 35 percent of a new employee’s annual base salary. Typically taken from the hiring department budget, these fees may not be visible to HR. By relying on internal mobility programs, referrals, candidate pipelines, and corporate career Websites, organizations can reduce or eliminate this agency spend. And when you do have to pay third-party agency fees, you can optimize the value you receive by collaborating with agencies to identify referred candidates, ensure access to candidate data and history, and receive automatic notifications and correspondence.
    3. Sourcing: Reduce Advertising Costs. You can realize significant cost reductions by placing all job positions on your corporate career Website. This will allow you to reap a substantial number of candidates at minimal cost compared to job boards and other sourcing options.
    4. Sourcing: Internal Talent Pool. Internal talent pools provide a way to reduce sourcing and advertising costs while delivering improved productivity and retention. Internal redeployment reduces costs and ramp-up time while increasing retention and employee satisfaction.
    5. Sourcing: External Talent Pool. Strategic recruiting requires identifying and matching people with a given set of skills to a particular job while efficiently allocating sourcing expenditures. By using an e-recruiting system (which drives external talent pool management) with a candidate relationship database, you can automate prescreening and candidate matching while communicating with targeted candidates. Candidate relationship management can lower sourcing costs by marketing new job opportunities to candidates sourced in the past. By mining the talent pool in this fashion, you eliminate the need to source a new pool of candidates for each new requisition. Managing and mining the corporate candidate database can reduce the sourcing cost per candidate by as much as 50 percent.
    6. Assessments: Reduce Turnover Costs. By taking advantage of assessments during the recruitment process, you can achieve a range of benefits, including better productivity, superior candidate performance, and lower turnover (providing considerable savings). Assessments also save recruiter and hiring manager time by focusing on a short list of qualified candidates. Hired for fit, such candidates tend to stay with the organization and produce quality work—ultimately driving revenue.
    7. Green Recruiting: Reduce Paper and Processing Costs. You can reduce recruiting costs by automating the process—and making it green. A paperless process informs candidates that you’re dedicated to green recruiting. It also leads to direct cost savings. E-recruiting reduces energy use and pollution associated with manufacturing, transporting, and recycling paper products. And process automation saves energy in mailing, storage, handling, filing, and reporting tasks. Direct cost savings come from reduced paperwork related to résumés, advertising, and onboarding.
    Improving the recruiting process through sourcing, assessments, and green recruiting not only saves costs. It also positions the company to improve the talent base during the recession while retaining the ability to grow appropriately in recovery.

    Read the article

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and is there anything special to look out for? I am looking for a solution that could be scaled up to at least 1000 simultaneous users online with good video resolution. I think it is good to have a general answer on what direction to choose, but here are more details on my specific case: I am looking for a solution almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with the content changing every day. We want the ability to make regular live broadcasts. I guess we will need to have several options of streaming quality (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like play-truncation rate); if you could point out what parameters we should be aware of, it would be great. The uplink to the internet is yet to be determined. We plan to start with something and scale up along the way. If you already have some kind of media streaming server, please describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.
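    Since the uplink is still to be determined, a rough back-of-the-envelope estimate helps size it. The numbers below are assumptions taken from the figures above (1000 concurrent viewers, the two stream bitrates, plus roughly 20% protocol overhead), not measurements:
        # Rough uplink sizing, illustrative only:
        echo $(( 1000 * 273 * 120 / 100 / 1000 )) "Mb/s approx. peak uplink (everyone on the ~273 kb/s stream)"
        echo $(( 1000 * 56 * 120 / 100 / 1000 )) "Mb/s approx. peak uplink (everyone on the ~56 kb/s stream)"
    In other words, serving 1000 viewers at the mid quality lands in the few-hundred-Mb/s range, which usually pushes the design toward a CDN or multiple edge servers rather than a single uplink.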

    Read the article

  • DRBD setup problem

    - by cuthieu
    I'm new to DRBD, so please help me fix the problem below. My drbd.conf is enclosed. Many thanks.
    [root@skonkwerks1 ~]# drbdadm create-md all
    open(/dev/hdb3) failed: No such file or directory
    Command 'drbdmeta /dev/drbd0 v08 /dev/hdb3 internal create-md' terminated with exit code 20
    drbdsetup exited with code 20
    [root@skonkwerks1 ~]# vi /etc/drbd.conf
    global { usage-count no; }
    resource repdata {
      protocol C;
      startup { wfc-timeout 0; degr-wfc-timeout 120; }
      disk { on-io-error detach; } # or panic #
      net { cram-hmac-alg "hdd1"; shared-secret "testing"; }
      syncer { rate 10M; }
      on skonkwerks1 {
        device /dev/drbd0;
        disk /dev/hdb1;
        address 172.29.156.1:7788;
        meta-disk internal;
      }
      on skonkwerks2 {
        device /dev/drbd0;
        disk /dev/hdb1;
        address 172.29.156.2:7788;
        meta-disk internal;
      }
    }
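    Worth noting: the error complains about /dev/hdb3 while the pasted configuration uses /dev/hdb1, which suggests drbdadm is parsing a different (or stale) config than the one shown. A hedged, generic first check, nothing here is specific to this machine:
        # Which partitions does the kernel actually know about?
        cat /proc/partitions
        fdisk -l /dev/hdb
        # What configuration is drbdadm really using? (the error says hdb3,
        # the pasted file says hdb1, so one of the two is out of date)
        drbdadm dump repdata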

    Read the article

  • Nginx : Proper use of limit_req_zone and limit_req

    - by xperator
    I have 2 websites running on a VPS. Their purpose is sharing music files and publishing news. Both of them use WordPress. What I am trying to do is prevent small-time attackers from flooding the webserver and putting enough stress on it to make it crash. The problem is that after using limit_req_zone and limit_req my websites became very slow. Browsing the WordPress control panel takes a very long time. I tried changing values but it didn't improve much. I guess the problem is WordPress, because it's the only script I am using on both the front and back end. Here is the last setting, which seems to be more responsive than the others:
    limit_req_zone $binary_remote_addr zone=flood:5m rate=10r/m;
    location ~ \.php$ {
        limit_req zone=flood burst=100 nodelay;
    }
    What are the optimal values that should be used in my case (WP)? I want the website to keep its normal behavior for legitimate visitors while still stopping people from flooding it. Another question: is it safe and sufficient to use limit_req only on PHP files?
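    For reference, a hedged starting point rather than a definitive answer: rate=10r/m averages out to one request per IP every six seconds, which an active wp-admin session (several PHP hits in quick succession, page after page) exhausts quickly even with a large burst. The values below are illustrative assumptions to tune from, not recommendations for this exact site:
        # Illustrative values only: a per-second rate with a modest burst lets a
        # normal page view through while still throttling sustained floods.
        limit_req_zone $binary_remote_addr zone=flood:10m rate=5r/s;
        location ~ \.php$ {
            limit_req zone=flood burst=20;
        }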

    Read the article

  • Impressions and Reactions from Alliance 2012

    - by user739873
    Alliance 2012 has come to a conclusion. What strikes me about every Alliance conference is the amazing amount of collaboration and cooperation I see across higher education in the sharing of best practices around the entire Oracle PeopleSoft software suite, not just the student information system (Oracle’s PeopleSoft Campus Solutions). In addition to the vibrant U.S. organization, it's gratifying to see the growth in international attendance again this year, with an EMEA HEUG organizing to complement the existing groups in the Netherlands, South Africa, and the U.K. Their first meeting is planned for London in October, and I suspect they'll be surprised at the amount of interest and attendance.
    In my discussions with higher education IT and functional leadership at Alliance there were a number of instances where concern was expressed about Oracle's commitment to higher education as an industry, primarily because of a lack of perceived innovation in the applications that Oracle develops for this market. Here I think perception and reality are far apart, and I'd like to explain why I believe this to be true.
    First let me start with what I think drives this perception. Predominantly it's in two areas. The first area is the user interface, both for students and faculty that interact with the system as "customers", and for those employees of the institution (faculty, staff, and sometimes students as well) that use the system in some kind of administrative role. Because the UI hasn't changed all that much from the PeopleSoft days, individuals perceive this as a dead product with little innovation and therefore conclude that Oracle isn't investing. The second area is around the integration of the higher education suite of applications (PeopleSoft Campus Solutions) and the rest of the Oracle software assets. Whether grown organically or acquired, there is an impressive array of middleware and other software products that could be leveraged much more significantly by the higher education applications than is currently the case today. This is also perceived as a lack of investment.
    Let me address these two points. First, the UI. More is being done here than ever before, and the PAG and other groups where this was discussed at Alliance 2012 were more numerous than I've seen in any past meeting. Whether it's Oracle development leveraging web services or some extremely early but very promising work leveraging the recent Endeca acquisition (see some cool examples here), there are a lot of resources aimed at this issue. There are also some amazing prototypes being developed by our UX (user experience) team that will eventually make their way into the higher education applications realm - they had an impressive setup at Alliance. Hopefully many of you that attended found this group. If not, the senior leader for that team, Jeremy Ashley, will be a significant contributor of content to our summer Industry Strategy Council meeting in Washington in June.
    In the area of integration with other elements of the Oracle stack, this is also an area of focus for the company and my team. We're making this a priority especially in the areas of identity management and security, leveraging WebCenter more effectively for content, imaging, and mobility, and driving towards the ultimate objective of WebLogic Suite as our platform for SOA, links to learning management systems (SAIP), and content. There is also much work around business intelligence centering on OBI applications.
    But at the end of the day we get enormous value from the HEUG (Higher Education User Group) and the various subgroups formed as a part of this community that help us align and prioritize our investments, whether it's around better integration with other Oracle products or integration with partner offerings. It's one of the healthiest, mutually beneficial relationships between customers and an education IT concern that exists on the globe. And I can't avoid mentioning that it is this kind of relationship between higher education and the corporate IT community that can truly address the problems of efficiency and effectiveness, institutional excellence (which starts with IT), and student success. It's not (in my opinion) going to be solved through community source - cost and complexity only increase in that model, and in the end higher education doesn't get to focus on its core competencies: educating, developing, and researching. While I agree with some of what Michael A. McRobbie wrote in his EDUCAUSE Review article (Information Technology: A View from Both Sides of the President’s Desk), I take strong issue with his assertion that "the IT marketplace is just the opposite of long-term stability...." Sure there has been healthy, creative destruction in the past 2-3 decades, but this has had the effect of, in the aggregate, benefiting education with greater efficiency, more innovation and increased stability as larger, more financially secure firms acquire and develop integrated solutions.
    Cole

    Read the article

  • Linux - preventing an application from failing due to lack of disk space [migrated]

    - by Jernej
    Due to an unpredicted scenario I am currently in need of finding a solution to the fact that an application (which I do not wish to kill) is slowly hogging the entire disk space. To give more context: I have an application in Python that uses multiprocessing.Pool to start 5 worker processes. Each worker writes some data to its own file. The program is running on Linux and I do not have root access to the machine. The program is CPU intensive and has been running for months. It still has a few days to go to write all the data. 40% of the data in the files is redundant and can be removed after a quick test. The system on which the program is running only has 30 GB of remaining disk space, and at the current rate of work it will surely be filled before the program finishes. Given the above points, I see the following possible solutions, each with its own concerns:
    1. Given that process number i is writing to file_i, is it safe to move file_i to an external location? Will the OS simply create a new instance of file_i and write to it? I assume moving the file would remove it and the process would end up writing to a "dead" file?
    2. Is there a "command line" way to stop 4 of the 5 spawned workers, wait until one of them finishes, and then resume their work? (I am sure a single worker would avoid hogging the disk.)
    3. Suppose I use CTRL+Z to freeze the main process. Will this stop all the other processes spawned by multiprocessing.Pool? If yes, can I then safely edit the files so as to remove the redundant lines?
    Given the three options that I see, would any of them work in this context? If not, is there a better way to handle this problem? I would really like to avoid the scenario in which the program crashes just a few days before it finishes.
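    A couple of low-risk experiments that speak to points 1 and 2 (the paths and PIDs below are placeholders, and this is a sketch of standard Linux behaviour rather than advice specific to this job): an open file descriptor follows the inode, so a rename within the same filesystem keeps receiving the writes under the new name, whereas a move to a different filesystem copies and unlinks the original, so the process keeps writing to a now-unlinked inode and no space is freed until the file is closed. Individual pool workers can also be paused and resumed with signals:
        # Same-filesystem rename test on a scratch file: the write issued after
        # the rename lands in the renamed file, because fd 3 points at the inode.
        ( exec 3>>/tmp/file_i; mv /tmp/file_i /tmp/file_i.moved; echo still-writing >&3 )
        cat /tmp/file_i.moved

        # Pausing/resuming individual workers (PIDs are hypothetical):
        pgrep -P <main_pid>        # multiprocessing workers are children of the main script
        kill -STOP <worker_pid>    # freeze one worker without killing it
        kill -CONT <worker_pid>    # let it continue later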

    Read the article

  • Best usage for a laptop being used as a desktop without removable batteries

    - by Senseful
    After reading the information on http://batteryuniversity.com, I realize that one of the best ways to permanently damage a lithium-ion battery is to use the battery at a high temperature while it's fully charged. This is exactly what happens when you use the computer as if it were a desktop computer, since leaving it plugged in will keep the battery at 100% and using the computer will heat up the battery. This is why it's recommended to remove the battery from your laptop if you are using it in this scenario. My question is: what would you do if the laptop doesn't have removable batteries (e.g. a MacBook Pro)? Should I use some kind of charge cycle such as: charge to 80%, unplug the power cord, use the laptop until it reaches 20%, then repeat the cycle by charging to 80% again? If so, which values should I use instead of 80% and 20%? (I think charging to 80% is better than 100% because of the damage that a hot battery at 100% can do, but I just made the 80% figure up, and I'm sure there's a better number to strive for which is backed by science.) I've read many of the articles on batteryuniversity.com, but couldn't find anything pertaining to this. Update: What about doing something like charging (or discharging) it to 50%, then plugging it in and turning on settings which use the battery as much as possible (e.g. brightness all the way up, wi-fi on, etc.), in order to try to maintain the battery at 50% (i.e. the rate at which it charges matches the rate at which it discharges)? This will probably heat up the battery, but it would mean you don't need to constantly plug and unplug the laptop. The one bad thing is that you are using up more charge cycles, which would decrease the battery life, so I'm not sure this is a good idea.

    Read the article

  • Taking the Plunge - or Dipping Your Toe - into the Fluffy IAM Cloud by Paul Dhanjal (Simeio Solutions)

    - by Greg Jensen
    In our last three posts, we’ve examined the revolution that’s occurring today in identity and access management (IAM). We looked at the business drivers behind the growth of cloud-based IAM, the shortcomings of the old, last-century IAM models, and the new opportunities that federation, identity hubs and other new cloud capabilities can provide by changing the way you interact with everyone who does business with you. In this, our final post in the series, we’ll cover the key things you, the enterprise architect, should keep in mind when considering moving IAM to the cloud.
    Invariably, what starts the consideration process is a burning business need: a compliance requirement, security vulnerability or belt-tightening edict. Many on the business side view IAM as the “silver bullet” – and for good reason. You can almost always devise a solution using some aspect of IAM. The most critical question to ask first when using IAM to address the business need is, simply: is my solution complete? Typically, “business” is not focused on the big picture. Understandably, they’re focused instead on the need at hand: Can we be HIPAA compliant in 6 months? Can we tighten our new hire, employee transfer and termination processes? What can we do to prevent another password breach? Can we reduce our service center costs by the end of next quarter? The business may not be focused on the complete set of services offered by IAM but rather a single aspect or two. But it is the job – indeed the duty – of the enterprise architect to ensure that all aspects are being met. It’s like remodeling a house but failing to consider the impact on the foundation, the furnace or the zoning or setback requirements. While the homeowners may not be thinking of such things, the architect, of course, must.
    At Simeio Solutions, the way we ensure that all aspects are being taken into account – to expose any gaps or weaknesses – is to assess our client’s IAM capabilities against a five-step maturity model ranging from “ad hoc” to “optimized.” The model we use is similar to Capability Maturity Model Integration (CMMI) developed by the Software Engineering Institute (SEI) at Carnegie Mellon University. It’s based upon some simple criteria, which can provide a visual representation of how well our clients fare when evaluated against four core categories:
    • Program Governance
    • Access Management (e.g., Single Sign-On)
    • Identity and Access Governance (e.g., Identity Intelligence)
    • Enterprise Security (e.g., DLP and SIEM)
    Often our clients believe they have a solution with all the bases covered, but the model exposes the gaps or weaknesses. The gaps are ideal opportunities for the cloud to enter into the conversation. The complete process is straightforward:
    1. Look at the big picture, not just the immediate need – what is our roadmap and how does this solution fit?
    2. Determine where you stand with respect to the four core areas – what are the gaps?
    3. Decide how to cover the gaps – what role can the cloud play?
    Returning to our home remodeling analogy, at some point, if gaps or weaknesses are discovered when evaluating the complete impact of the proposed remodel – if the existing foundation wouldn’t support the new addition, for example – the owners need to decide if it’s time to move to a new house instead of trying to remodel the old one. However, with IAM it’s not an either-or proposition – i.e., either move to the cloud or fix the existing infrastructure.
    It’s possible to use new cloud technologies just to cover the gaps. Many of our clients start their migration to the cloud this way, dipping in a toe instead of taking the plunge all at once. Because our cloud services offering is based on the Oracle Identity and Access Management Suite, we can offer a tremendous amount of flexibility in this regard. The Oracle platform is not a collection of point solutions, but rather a complete, integrated, best-of-breed suite. Yet it’s not an all-or-nothing proposition. You can choose just the features and capabilities you need using a pay-as-you-go model, incrementally turning services on and off as needed. Better still, all the other capabilities are there, at the ready, whenever you need them. Spooling up these cloud-only services takes just a fraction of the time it would take a typical organization to deploy internally. SLAs in the cloud may be higher than on premise, too. And by using a suite of software that’s complete and integrated, you can dramatically lower cost and complexity. If your in-house solution cannot be migrated to the cloud, you might consider using hardware appliances such as Simeio’s Cloud Interceptor to extend your enterprise out into the network. You might also consider using Expert Managed Services. Cost is usually the key factor – not just development costs but also operational sustainment costs. Talent or resourcing issues often come into play when thinking about sustaining a program. Expert Managed Services such as those we offer at Simeio can address those concerns head on. In a cloud offering, identity and access services lend themselves to the new paradigms described in my previous posts. Most importantly, this allows us all to focus on what we're meant to do – provide value, lower costs and increase security for our respective organizations. It’s that magic “silver bullet” that the business knew you had all along. If you’d like to talk more, you can find us at simeiosolutions.com.

    Read the article

  • Unusually high dentry cache usage

    - by Wolfgang Stengel
    Problem
    A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was not responding to requests in a timely manner anymore due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this:
    Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB.
    Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB).
    After the restart, the free memory was 119 GB, going down to around 100 GB within an hour, as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM.
    After the restart, Slab memory was around 300 MB.
    I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop most of it is used for dentry entries.
    Free memory:
    free -m
                 total       used       free     shared    buffers     cached
    Mem:        129048      76435      52612          0        144       7675
    -/+ buffers/cache:      68615      60432
    Swap:         8191          0       8191
    Meminfo:
    cat /proc/meminfo
    MemTotal:       132145324 kB
    MemFree:         53620068 kB
    Buffers:           147760 kB
    Cached:           8239072 kB
    SwapCached:             0 kB
    Active:          20300940 kB
    Inactive:         6512716 kB
    Active(anon):    18408460 kB
    Inactive(anon):     24736 kB
    Active(file):     1892480 kB
    Inactive(file):   6487980 kB
    Unevictable:         8608 kB
    Mlocked:             8608 kB
    SwapTotal:        8388600 kB
    SwapFree:         8388600 kB
    Dirty:              11416 kB
    Writeback:              0 kB
    AnonPages:       18436224 kB
    Mapped:             94536 kB
    Shmem:               6364 kB
    Slab:            46240380 kB
    SReclaimable:    44561644 kB
    SUnreclaim:       1678736 kB
    KernelStack:         9336 kB
    PageTables:        457516 kB
    NFS_Unstable:           0 kB
    Bounce:                 0 kB
    WritebackTmp:           0 kB
    CommitLimit:     72364108 kB
    Committed_AS:    22305444 kB
    VmallocTotal:    34359738367 kB
    VmallocUsed:       480164 kB
    VmallocChunk:    34290830848 kB
    HardwareCorrupted:      0 kB
    AnonHugePages:   12216320 kB
    HugePages_Total:     2048
    HugePages_Free:      2048
    HugePages_Rsvd:         0
    HugePages_Surp:         0
    Hugepagesize:        2048 kB
    DirectMap4k:         5604 kB
    DirectMap2M:      2078720 kB
    DirectMap1G:    132120576 kB
    Slabtop:
    slabtop --once
    Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
    Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
    Active / Total Caches (% used)     : 110 / 194 (56.7%)
    Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
    Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K
      OBJS     ACTIVE    USE  OBJ SIZE   SLABS  OBJ/SLAB  CACHE SIZE  NAME
    221416340  221416039  3%     0.19K  11070817      20   44283268K  dentry
    1123443    1122739   99%     0.41K    124827       9     499308K  fuse_request
    1122320    1122180   99%     0.75K    224464       5     897856K  fuse_inode
    761539     754272    99%     0.20K     40081      19     160324K  vm_area_struct
    437858     223259    50%     0.10K     11834      37      47336K  buffer_head
    353353     347519    98%     0.05K      4589      77      18356K  anon_vma_chain
    325090     324190    99%     0.06K      5510      59      22040K  size-64
    146272     145422    99%     0.03K      1306     112       5224K  size-32
    137625     137614    99%     1.02K     45875       3     183500K  nfs_inode_cache
    128800     118407    91%     0.04K      1400      92       5600K  anon_vma
    59101      46853     79%     0.55K      8443       7      33772K  radix_tree_node
    52620      52009     98%     0.12K      1754      30       7016K  size-128
    19359      19253     99%     0.14K       717      27       2868K  sysfs_dir_cache
    10240      7746      75%     0.19K       512      20       2048K  filp
    VFS cache pressure:
    cat /proc/sys/vm/vfs_cache_pressure
    125
    Swappiness:
    cat /proc/sys/vm/swappiness
    0
    I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, apparently the machine experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.
    Questions
    I have these questions:
    1. Am I correct in thinking that the Slab memory is always physical RAM, and that the number is already subtracted from the MemFree value?
    2. Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files, however most of them are archives and not being accessed at all for regular web traffic.
    3. What could be an explanation for the fact that the number of cached inodes is much lower than the number of cached dentries? Should they not be related somehow?
    4. If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen?
    5. Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what are the paths that are being cached)? Perhaps this points to some kind of memory leak, symlink loop, or indeed to something the PHP application is doing wrong.
    6. The PHP application code as well as all asset files are mounted via the GlusterFS network file system; could that have something to do with it?
    Please keep in mind that I can not investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see if the Slab memory is indeed reclaimable. Any insights into what could be going on and how I can investigate any further would be greatly appreciated.
Updates Some further diagnostic information: Mounts: cat /proc/self/mounts rootfs / rootfs rw 0 0 proc /proc proc rw,relatime 0 0 sysfs /sys sysfs rw,relatime 0 0 devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0 devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0 tmpfs /dev/shm tmpfs rw,relatime 0 0 /dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0 /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0 /dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0 tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0 tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0 cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0 cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0 cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0 cgroup /cgroup/memory cgroup rw,relatime,memory 0 0 cgroup /cgroup/devices cgroup rw,relatime,devices 0 0 cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0 cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0 cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0 /etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 /etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0 172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0 Mount info: cat /proc/self/mountinfo 16 21 0:3 / /proc rw,relatime - proc proc rw 17 21 0:0 / /sys rw,relatime - sysfs sysfs rw 18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755 19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000 20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw 21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered 22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw 23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered 24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777 25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777 26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw 27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset 28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu 29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct 30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory 31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices 32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer 33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls 34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio 35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw 39 21 0:31 / /data/www rw,relatime - nfs 
172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 GlusterFS config: cat /etc/glusterfs/glusterfs-www.vol volume remote1 type protocol/client option transport-type tcp option remote-host 172.17.39.71 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote2 type protocol/client option transport-type tcp option remote-host 172.17.39.72 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote3 type protocol/client option transport-type tcp option remote-host 172.17.39.73 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote4 type protocol/client option transport-type tcp option remote-host 172.17.39.74 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume replicate1 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote1 remote2 end-volume volume replicate2 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote3 remote4 end-volume volume distribute type cluster/distribute subvolumes replicate1 replicate2 end-volume volume iocache type performance/io-cache option cache-size 8192MB # default is 32MB subvolumes distribute end-volume volume writeback type performance/write-behind option cache-size 1024MB option window-size 1MB subvolumes iocache end-volume ### Add io-threads for parallel requisitions volume iothreads type performance/io-threads option thread-count 64 # default is 16 subvolumes writeback end-volume volume ra type performance/read-ahead option page-size 2MB option page-count 16 option force-atime-update no subvolumes iothreads end-volume
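    Since root access is not available, one unprivileged way to put numbers behind the "about 5 GB per day" growth (and to see whether reclaim ever kicks in on its own) is simply to log the slab counters over time; this is a generic sketch, nothing in it is specific to this machine:
        # Append a timestamped snapshot of the slab counters every hour;
        # works as a regular user because /proc/meminfo is world-readable.
        while true; do
            date '+%F %T'
            grep -E '^(MemFree|Slab|SReclaimable|SUnreclaim):' /proc/meminfo
            sleep 3600
        done >> slab_trend.log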

    Read the article

  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and it was suggested that I should ask it here. We have a self-hosted git server (gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). This same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the server is as a git server only. This git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get this error message:
    ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number
    fatal: The remote end hung up unexpectedly
    After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also like to note that if I access the other files hosted on the server through HTTP, it works fine. I found a couple of questions on Stack Overflow and on other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. However, I don't think this is the problem in our case, as we get this behavior on Windows (using msysgit only) as well as on Mac machines. Also, if it were an SSH configuration issue, it shouldn't work at all; but in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to the same server (or the same repo), or something like that. Does there exist a setting or a conf file that needs to be modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
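    One server-side setting worth checking, offered as a guess rather than a diagnosis: OpenSSH caps concurrent unauthenticated connections with MaxStartups (default 10), and a burst of simultaneous git fetches can bump into that cap. The directive is a real sshd_config option; the numbers below are only illustrative:
        # See whether MaxStartups is at its (low) default on the VPS:
        grep -i 'MaxStartups' /etc/ssh/sshd_config
        # A more generous, still rate-limited value might look like:
        #   MaxStartups 30:50:100
        # (start probabilistically dropping new unauthenticated connections at 30,
        #  drop them all at 100), then reload sshd.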

    Read the article

  • Erratic WiFi 2.4 GHz channel spikes, what gives?

    - by Francis W. Usher
    Sorry guys, first a gripe about my neighbor's WiFi access point (it is related): they totally hog the center nine 2.4 GHz channels (3-11), centered right at 7! I know the outer regions of the signal don't make as much of a difference, and technically they're running channels 5 & 9. Anyway, their signal is clearly interfering with mine, which is necessarily centered at 3 or 11 to evade their interference. I guess it's somewhat a case of access point envy: they happen to have both a stronger signal and a higher data rate, while occupying twice the bandwidth that I do. Getting to the point, I've noticed that they tend to sit nice and pretty centered at 7, but they definitely auto-select their channel, and I've noticed that the auto-selection algorithm tends to shift towards the higher channels; hence I decided to pick channel 3, and I don't get so many intermittent lag spikes any more. Anyway, the thing that weirded me out was the reason they have to auto-select sometimes: unexplained, powerful (we're talking on the order of 0 dB here) giant spikes of 2.4 GHz activity in consistent regions of the spectrum. I don't think it's just noise, since my wireless monitoring software is registering a MAC address, a manufacturer, and usually a fairly coherent ASCII name... and it seems to be a fairly well-confined signal. But these signals are fairly common, and they do some weird stuff to my signal. So my question is: what are these signals? Where are they coming from? Where are they going? Why are they so ridiculously strong? Why don't they ever last very long? Here's an inSSIDer screenshot I took, for your perusal. I am labeled with "me", my greedy neighbor is labeled with "neighbor", and the 2 quasar signals are labeled with "WTF?".

    Read the article

  • converting to MXF using ffmpeg

    - by Prakash
    I have been trying to use the FFmpeg utility to convert an AVI file to MXF format using DNxHD. I am calling FFmpeg with the following parameters:
    ffmpeg -i ccvt_box.avi -vcodec dnxhd -video_size 1920x1080 -r 24 -b:v 115m ex.mxf
    The error it is giving:
    ffmpeg version N-43737-g76c3fff Copyright (c) 2000-2012 the FFmpeg developers
    built on Aug 20 2012 18:50:42 with llvm-gcc 4.2.1 (LLVM build 2336.11.00)
    configuration:
    libavutil 51. 70.100 / 51. 70.100
    libavcodec 54. 53.100 / 54. 53.100
    libavformat 54. 25.104 / 54. 25.104
    libavdevice 54. 2.100 / 54. 2.100
    libavfilter 3. 11.101 / 3. 11.101
    libswscale 2. 1.101 / 2. 1.101
    libswresample 0. 15.100 / 0. 15.100
    Input #0, avi, from 'ccvt_box.avi':
      Duration: 00:00:10.00, start: 0.000000, bitrate: 691 kb/s
      Stream #0:0: Video: indeo5 (IV50 / 0x30355649), yuv410p, 340x344, 10 tbr, 10 tbn, 10 tbc
      Metadata:
        title : bob.avi
    [dnxhd @ 0x7fcd60818e00] video parameters incompatible with DNxHD
    Output #0, mxf, to 'ex.mxf':
      Stream #0:0: Video: dnxhd, yuv422p, 340x344, q=2-1024, 90k tbn, 24 tbc
      Metadata:
        title : bob.avi
    Stream mapping:
      Stream #0:0 -> #0:0 (indeo5 -> dnxhd)
    Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
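    For what it's worth, a hedged sketch of a command that should satisfy the encoder: DNxHD only accepts a fixed table of frame size / frame rate / pixel format / bitrate combinations (1920x1080 at 24 fps, yuv422p, 115 Mb/s is one of them), and -video_size is an input option rather than a scaler, so the 340x344 source reaches the encoder unscaled, as the Output line above shows. Filenames are just the ones from the question:
        # Force a legal DNxHD combination (1080p24, yuv422p, 115M = 115 Mb/s);
        # whether to scale, pad, or crop the small source is a separate decision.
        ffmpeg -i ccvt_box.avi -vf scale=1920:1080 -r 24 -pix_fmt yuv422p \
               -vcodec dnxhd -b:v 115M ex.mxf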

    Read the article

  • Why is MySQL table_cache full but never used

    - by Jeremy Clarke
    I have been using the tuning-primer.sh script to tune my my.cnf settings. I have most things working well but the part about TABLE CACHE makes no sense:
    TABLE CACHE
    Current table_cache value = 900 tables.
    You have a total of 0 tables
    You have 900 open tables.
    Current table_cache hit rate is 1% , while 100% of your table cache is in use.
    You should probably increase your table_cache
    When I do SHOW STATUS; I get the following table-related numbers:
    Open_tables = 900
    Opened_tables = 0
    It seems like something is going wrong. I have some extra memory I could use on increasing the table_cache size, but my sense is that the 900 tables already available aren't doing anything, and increasing it will just waste more energy. Why might this be happening? Are there other settings that could cause all my table_cache slots to be used even though there are no hits to them? I have 150 max connections and probably no more than 4 tables per join, FWIW. Here is the tuner script output for temp tables, which I've also been tuning:
    TEMP TABLES
    Current max_heap_table_size = 90 M
    Current tmp_table_size = 90 M
    Of 11032358 temp tables, 40% were created on disk
    Perhaps you should increase your tmp_table_size and/or max_heap_table_size to reduce the number of disk-based temporary tables. Note! BLOB and TEXT columns are not allow in memory tables. If you are using these columns raising these values might not impact your ratio of on disk temp tables.
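    One possible explanation, offered as a hedge rather than a diagnosis: since MySQL 5.0, a bare SHOW STATUS returns session-scoped counters, so Opened_tables can legitimately read 0 for a fresh connection even though the server-wide counter (which is what table_cache sizing cares about) is much larger. Comparing the two scopes makes that visible; the commands are generic:
        # Server-wide counters (what table_cache sizing should be judged against):
        mysql -e "SHOW GLOBAL STATUS LIKE 'Open%tables'"
        # Counters for the current connection only (often near zero):
        mysql -e "SHOW SESSION STATUS LIKE 'Open%tables'"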

    Read the article

  • Are there compact external USB audio interfaces which are better than a on-board sound?

    - by rumtscho
    I am asking this for a friend. He loves his voice recognition software and dictates a lot of text using a headset. Now he has a new laptop, which only has a combined mic/headphones jack, and he wanted to buy an adapter. I told him to get an external USB sound interface instead, as the better sound quality will probably increase the hit rate of the voice recognition. He agreed, but when he saw a picture of the SoundBlaster X-Fi, he said that it is way too big, because he wants to carry the thing everywhere. He'd rather have one of those small things the size of a flash memory stick, with only one mic input and one headphone output, period. Now I am not sure whether these mini interfaces would produce better sound than onboard audio. They all seem to come not from established audio interface manufacturers, but from electronics accessories manufacturers like Speedlink, or just no-name brands. Is there a compact audio interface with good A/D quality? (It is OK if the price is comparable to that of the bigger interfaces, even if there is no additional functionality like Cinch (RCA) in-/outputs etc.) And if there isn't, will the no-name sound card sticks offer any advantage over a simple adapter for the onboard sound?

    Read the article

  • Reliable custom Windows shortcut keys?

    - by Peter Baer
    I have global Windows shortcut keys assigned to several different cmd.exe instances. I do this by creating shortcuts to cmd.exe on my desktop, and assigning each one a unique shortcut key (for example, CTRL + SHIFT + U). Pretty basic stuff. I'm using Win2K8 (R1 and R2). This works just fine... most of the time. But with infuriating regularity, sometimes it doesn't. Or it will work with a long delay (many seconds). It doesn't matter what app currently has focus (it can even be one of the command prompts). It doesn't matter what keys I assign (I've tried a few variations of WIN, CTRL and SHIFT). I did notice that this is often, but not always, correlated with explorer.exe struggling in some way or another (say, an explorer window opened to a file share that's unavailable, or an app being unresponsive, or whatever). In other words the shortcut key handling appears to be very sensitive to unrelated system activity. Note that whenever I have this problem I can always successfully ALT + TAB to the window I want to get to, but that's tedious. I use the shortcuts to these command windows hundreds of times a day so even a 1% failure rate becomes really annoying. Is there a way to fix this, or is there some third-party utility out there that will RELIABLY intercept custom key combinations to bring focus to whatever apps I want, in a way that is independent of other system activity? ADDENDUM: There is a property of the Windows shortcuts that I would not want to lose if switching to a third-party hotkey tool: Windows shortcuts are idempotent. Once you've launched a shortcut to some app, pressing the shortcut key combo again takes you to the already launched process - it does not launch a new process.

    Read the article

  • Why Executives Need Enterprise Project Portfolio Management: 3 Key Considerations to Drive Value Across the Organization

    - by Melissa Centurio Lopes
    By: Guy Barlow, Oracle Primavera Industry Strategy Director
    Over the last few years there has been a tremendous shift – some would say tectonic in nature – that has brought project management to the forefront of executive attention. Many factors have been driving this growing awareness, most notably the global financial crisis, heightened regulatory environments and a need to more effectively operationalize corporate strategy. Executives in India are no exception. In fact, given the phenomenal rate of progress of the country, top of mind for all executives (whether in finance, operations, IT, etc.) is the need to build capacity, ramp up production and ensure that the right resources are in place to capture growth opportunities. This applies across all industries, from asset-intensive sectors – like oil & gas, utilities and mining – to traditional manufacturing and the public sector; services-based sectors such as financial, telecom and life sciences are also part of the mix.
    However, compounding matters is a complex interplay between projects – big and small, complex and simple – as companies expand and grow both domestically and internationally. So having a standardized, enterprise-wide solution for project portfolio management is natural. Failing to do so is akin to having two ERP systems, one to manage “large” invoices and one to manage “small” invoices. It makes no sense and provides no enterprise-wide visibility. Therefore, it is imperative for executives to understand the full range of their business commitments, the benefit to the company, current performance and the associated course corrections if needed. Irrespective of industry, and regardless of the use case (e.g., building a power plant, launching a new financial service or developing a new automobile), company leaders need to approach the value of enterprise project portfolio management via 3 critical areas:
    1. Greater Financial Discipline – Improving financial rigor and results through better governance and control is an imperative given today’s financial uncertainty and greater investment scrutiny. For example, as India plans a US$1 trillion investment in the country’s infrastructure, how do companies ensure costs are managed? How do you control cash flow? Can you easily report this to stakeholders?
    2. Improved Operational Excellence – Increase efficiency and reduce costs through robust collaboration and integration. Upwards of 66% of cost variances are driven by poor supplier collaboration. As you execute initiatives, do you have visibility into the performance of your supply base? How are they integrated into the broader program plan?
    3. Enhanced Risk Mitigation – Manage and react to uncertainty through improved transparency and contingency planning. What happens if you’re faced with a skills shortage? How do you plan and account for geo-political or weather-related events?
    In summary, projects are not just the delivery of a product or service to a customer inside a predetermined schedule; they often form a contractual and even moral obligation to shareholders and stakeholders alike. Hence the intimate connection between executives and projects, with the latter providing executives with the platform to demonstrate that their organization has the capabilities and competencies needed to meet and, whenever possible, exceed their customer commitments. Effectively developing and operationalizing corporate strategy is the hallmark of successful executives, and enterprise project and portfolio management allows them to achieve this goal. This article was first published in Manage India, an e-newsletter of PMI India.

    Read the article

  • Chrome pop-up blocker fail?

    - by Count Zero
    I have had this problem for just a few days. I've been a heavy Chrome user for several years, but it never happened before: sometimes absolutely uncalled-for pop-ups appear when I click something absolutely legit. It seems to happen at a fairly regular rate of two pop-ups every hour or so. The pop-ups are not very varied; it seems to be a fixed set of some 4-5 ads. At first I thought I had caught some malware, but after a full scan with Malwarebytes Anti-Malware I found absolutely nothing. In addition, the problem exists only with Chrome. When I use Firefox, this never ever happens. What is even more puzzling is that it happens on both of my rigs and started on the same day. I have a removable storage drive that I dock to both of them and access one machine from the other via RDP, but still... It seems this is a problem that others face too. See this Google forum and here. Could it be that the latest Chrome build is just acting up? (Just in case it matters: I run Chrome version 24.0.1312.57 m on Win 7.)

    Read the article

  • EC2 configuration for medium load service on Django

    - by Luberg
    I have created a very basic Django application which puts an email address into the database (a "coming soon" page for a startup). I launched a t1.micro instance to see how much load it can handle. Nginx + FastCGI from Django + sqlite/postgres (I tried both). A blitz.io test gave me a pretty unhappy result (just 100 users within 1 minute): This rush generated 542 successful hits in 1.0 min and we transferred 809.01 KB of data in and out of your app. The average hit rate of 8.81/second translates to about 761,612 hits/day. You got bigger problems though: 87.28% of the users during this rush experienced timeouts or errors! I tried putting Varnish in front, disabled Debug mode in Django, and started FastCGI in threaded mode - nothing helps. This is not going to be a super high-load page - just a coming soon page to save subscribers' email addresses - but it should handle at least 500-1000 concurrent users at peak... I believe t1.micro is far too small for that, but I have also tried a small instance - with no better result. Please let me know whether I should use something other than Amazon EC2, whether I should pick something bigger than t1.micro, or whether this is definitely a configuration issue...
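    A quick, generic check before blaming the stack, offered as a guess since t1.micro CPU throttling is a common cause of exactly this timeout pattern: watch the hypervisor steal time during a rush; if the st column pins high, the instance itself is being throttled and no Nginx/FastCGI tuning will change the outcome.
        # Run on the instance while the blitz.io rush is in progress;
        # the last column (st) is CPU time stolen/throttled by the hypervisor.
        vmstat 1 60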

    Read the article

  • Apache Bench reports different result with same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up.
    1) If I run: ab -n 15000 http://mysite.com/index.php, Apache Bench returns Time per request: 41 ms, but Document Length: 0 bytes and HTML transferred: 0 bytes. The Transfer rate is 7.9 Kb/s.
    2) If I run: ab -n 15000 http://mysite.com/, Apache Bench returns Time per request: 83 ms along with the accurate document length and HTML transferred total.
    The APC cache status reports identical hit counts from both tests. Also, Apache Bench reports no errors in either case. Overall, no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce a similar result. My 2 questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
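    A neutral first step, independent of ab (the URLs are the ones from the question): a document length of 0 bytes means ab received responses with an empty body for that URL, so comparing the raw status code and body size of each URL should show which test reflects the page actually being rendered.
        # Print status code and downloaded body size for both URLs; this makes
        # no assumption about why they differ, it just makes the difference visible.
        curl -sS -o /dev/null -w '%{http_code} %{size_download} bytes\n' http://mysite.com/index.php
        curl -sS -o /dev/null -w '%{http_code} %{size_download} bytes\n' http://mysite.com/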

    Read the article

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in enterprise social networking and the integration of social media within business processes.
    By Michael Krebs
    The relevance of "easy sign-on" in the age of the "Social Customer"
    With the growth of social networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why users' social network accounts are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If, for example, you can offer a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.
    Recruiting and hiring of "digital natives"
    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market for professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or the usage of profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in the market, where the pool of potential candidates has decreased dramatically over the years.
    Identity Management as a key factor in the Customer Experience process
    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful identity management solutions. With the highly efficient Oracle (social and mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data.
    The goal is to enrich the so-called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in B2C markets and will gradually spread out to B2B channels in the near future.
    Conclusion: Central and "social" identity management is key to Customer Experience Management and Talent Management
    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate social sign-on capabilities with modern CX and talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern customer experience infrastructure. Oracle Identity Management solutions provide the opportunity to secure social applications and connect them with modern CX solutions. In the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.
    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering social and mobile access for Oracle’s CRM and HCM solutions.
    …..End Guest Post….
    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.
    Additional Resources:
    Oracle Identity Management 11gR2 release
    Oracle Identity Management website
    Datasheet: Mobile and Social Access (pdf)
    IDM at OOW: Focus on Identity Management
    Facebook: OracleIDM
    Twitter: OracleIDM
    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

    Read the article

  • High speed network configuration

    - by Peter M
Sorry if this seems to be a stupid question; I'm not sure how to phrase what I want to know when searching Google. I will have 2 or 3 devices pumping out data on a 100Base-T port. The combined data rate of all devices is about 15 MB/s, which exceeds the optimal 100Base-T channel capacity (roughly 12 MB/s) but is well within the realms of a 1000Base-T connection. Each device will be sending a burst of data in the form of an FTP transfer to a common, single host computer in a sequential manner, i.e.: Device A establishes an FTP connection and transfers data; Device B establishes an FTP connection and transfers data; Device C establishes an FTP connection and transfers data. It may be that the A&B, B&C and C&A transfers overlap in the time domain to some extent. There will be minimal traffic going back from the computer to each device (in general, whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer. Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream, and if so, what features in a switch should I be looking for? Or would it be better to have 3 physical point-to-point 100Base-T dedicated connections between each device and the host computer (thus having at least 3 physical Ethernet interfaces on that computer)? Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help, Peter
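A quick back-of-the-envelope check (assuming the MB/s units above and a per-device rate of about 5 MB/s, both of which are inferred from the question) shows why a single gigabit uplink comfortably absorbs the combined streams; the figures are nominal line rates only, ignoring FTP/TCP/Ethernet overhead.

"""Rough capacity check: three 100Base-T devices feeding one 1000Base-T uplink.
Per-device rate is an assumption; only nominal line rates are used."""

MBIT = 1_000_000

fast_ethernet = 100 * MBIT / 8    # ~12.5 MB/s ceiling per 100Base-T port
gigabit = 1000 * MBIT / 8         # ~125 MB/s ceiling on the uplink

devices = 3
per_device = 5 * 1_000_000        # assumed ~5 MB/s per device
combined = devices * per_device   # ~15 MB/s if all transfers overlap

print(f"per-port ceiling : {fast_ethernet / 1e6:.1f} MB/s")
print(f"combined demand  : {combined / 1e6:.1f} MB/s")
print(f"uplink ceiling   : {gigabit / 1e6:.1f} MB/s")
print("uplink sufficient:", combined < gigabit)

If the three transfers fully overlap, the host-side link sees the ~15 MB/s sum, which is why the single uplink to the host needs to be gigabit even though the device-facing ports can stay at 100Base-T.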

    Read the article

  • Linux: prevent outgoing TCP flood

    - by Willem
I run several hundred webservers behind load balancers, hosting many different sites with a plethora of applications (over which I have no control). About once a month, one of the sites gets hacked and a flood script is uploaded to attack some bank or political institution. In the past, these were always UDP floods, which were effectively resolved by blocking outgoing UDP traffic on the individual webserver. Yesterday they started flooding a large US bank from our servers using many TCP connections to port 80. As these types of connections are perfectly valid for our applications, just blocking them is not an acceptable solution. I am considering the following alternatives. Which one would you recommend? Have you implemented any of them, and how? (1) Limit outgoing TCP packets with source port != 80 on the webserver (iptables). (2) The same, but with queueing (tc). (3) Rate-limit outgoing traffic per user per server. Quite an administrative burden, as there are potentially thousands of different users per application server. Maybe this: how can I limit per-user bandwidth? (4) Anything else? Naturally, I'm also looking into ways to minimize the chance of hackers getting into one of our hosted sites, but as that mechanism will never be 100% waterproof, I want to severely limit the impact of an intrusion. Cheers!
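For reference, a minimal sketch of alternative (1) follows: it uses iptables to rate-limit outbound TCP connections that the server itself initiates, while letting ordinary HTTP/HTTPS responses (source port 80/443) through untouched. The chain layout, the rate values and the Python wrapper are illustrative assumptions, not a tested production policy.

"""Sketch of alternative (1): throttle server-initiated outbound TCP with
iptables. Rates and rule layout are illustrative; run as root."""

import subprocess

RULES = [
    # Responses from the web server itself (source port 80/443) pass freely.
    ["iptables", "-A", "OUTPUT", "-p", "tcp", "--sport", "80", "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-p", "tcp", "--sport", "443", "-j", "ACCEPT"],
    # Allow a modest rate of new outbound connections (legitimate callbacks),
    # then drop new connections beyond that rate; established flows are untouched.
    ["iptables", "-A", "OUTPUT", "-p", "tcp", "-m", "state", "--state", "NEW",
     "-m", "limit", "--limit", "30/minute", "--limit-burst", "10", "-j", "ACCEPT"],
    ["iptables", "-A", "OUTPUT", "-p", "tcp", "-m", "state", "--state", "NEW",
     "-j", "DROP"],
]


def apply_rules() -> None:
    # Apply the rules in order; iptables evaluates them top to bottom.
    for rule in RULES:
        subprocess.run(rule, check=True)


if __name__ == "__main__":
    apply_rules()

The same rules can of course be issued straight from a shell; wrapping them in a script only makes it easier to roll one policy out across several hundred webservers.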

    Read the article

  • Apache https is slow

    - by raucous12
Hey, I've set Apache up to use SSL with a self-signed certificate. With http (KeepAlive off), I can get over 5000 requests per second. However, with https, I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

Server Software:        Apache/2.2.3
Server Hostname:        127.0.0.1
Server Port:            443
SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
Document Path:          /hello.html
Document Length:        29 bytes
Concurrency Level:      5
Time taken for tests:   30.49425 seconds
Complete requests:      411
Failed requests:        0
Write errors:           0
Total transferred:      119601 bytes
HTML transferred:       11919 bytes
Requests per second:    13.68 [#/sec] (mean)
Time per request:       365.565 [ms] (mean)
Time per request:       73.113 [ms] (mean, across all concurrent requests)
Transfer rate:          3.86 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      190  347   74.3    333   716
Processing:     0   14   24.0      1   166
Waiting:        0   11   21.6      0   165
Total:        191  361   80.8    345   716

Percentage of the requests served within a certain time (ms)
  50%    345
  66%    377
  75%    408
  80%    421
  90%    468
  95%    521
  98%    578
  99%    596
 100%    716 (longest request)
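The ab output points at the connect phase (mean 347 ms) rather than request processing, which is consistent with a full DHE-RSA handshake with 4096-bit parameters being performed on every request. Below is a rough sketch for confirming that locally by timing plain TCP connects against full TLS handshakes; host, ports and iteration count are placeholders, and certificate verification is disabled only because the certificate is self-signed.

"""Rough probe: compare plain TCP connect time with full TLS handshake time.
Host, ports and iteration count are placeholders for the setup above."""

import socket
import ssl
import time

HOST, HTTPS_PORT, HTTP_PORT, N = "127.0.0.1", 443, 80, 50


def time_connects(use_tls: bool) -> float:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # self-signed certificate
    ctx.verify_mode = ssl.CERT_NONE
    start = time.perf_counter()
    for _ in range(N):
        port = HTTPS_PORT if use_tls else HTTP_PORT
        sock = socket.create_connection((HOST, port), timeout=5)
        if use_tls:
            # Wrapping an already-connected socket performs the handshake now.
            sock = ctx.wrap_socket(sock, server_hostname=HOST)
        sock.close()
    return (time.perf_counter() - start) / N


if __name__ == "__main__":
    print(f"plain TCP connect: {time_connects(False) * 1000:.1f} ms")
    print(f"TLS handshake:     {time_connects(True) * 1000:.1f} ms")

If the TLS column dominates, the usual suspects are the absence of keep-alive and session resumption plus the large DH parameters; re-running ab with keep-alive enabled (ab -k) is a quick way to separate handshake cost from per-request cost.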

    Read the article

  • C# sends SQL data 4 times more slowly from one box than from another

    - by Bobb
W2003, .NET 3.5, SQL 2008. I have prod and UAT app servers deployed in 2 different data centres. I have a C# app which reads a text file, parses the text and sends the data to SQL in bulk. The SQL server is in the US and the app servers are in London (but in different places). All POPs have dedicated network connections; there is no public internet involved. When the app runs on the UAT server, I can see in Perfmon that Send bytes/sec is 4x higher than from the production server. My estimate is that one server outputs at a 1 MB/s rate and the other at 250 KB/s. My immediate suspicion is that there is a router in one of the DCs which shapes traffic or does QoS limitation on traffic from London to the US. However, the support, Windows and networking teams all say that there are no differences in either the networking config of the 2 DCs or the NIC config of the 2 app servers. How can I find out why the networking bottleneck is 4 times tighter in one place than in the other? And what can I do about it?
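One way to answer the "where is the bottleneck" question is to take the application and SQL out of the picture and measure raw TCP throughput from each app server to a host in the SQL-side data centre. The sketch below is a bare-bones probe in that spirit (an iperf-style tool does the same job); the port, chunk size and transfer volume are arbitrary placeholders.

"""Bare-bones TCP throughput probe. Run 'recv' on a host near the SQL server,
then run the sender on each app server. Port and sizes are placeholders."""

import socket
import sys
import time

PORT, CHUNK, TOTAL_MB = 5001, 64 * 1024, 100


def receiver() -> None:
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    start = time.perf_counter()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.perf_counter() - start
    print(f"{received / secs / 1024:.0f} KB/s over {secs:.1f}s")


def sender(host: str) -> None:
    sock = socket.create_connection((host, PORT))
    payload = b"x" * CHUNK
    for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
        sock.sendall(payload)
    sock.close()


if __name__ == "__main__":
    # Usage: probe.py recv   (receiver)  |  probe.py <receiver-host>  (sender)
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[1])

If the raw numbers reproduce the 4x gap, the path is to blame (routing, traffic shaping, or TCP window limits over the transatlantic latency); if they match, the difference lies in the application or bulk-load configuration on the two servers.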

    Read the article

  • VirtualBox management interface unreliability

    - by Arlen Cuss
I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question or of all VMs. For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything; sometimes trying to restart /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime no data can be collected using VBoxManage. Such data includes the VM's IP, so if I hadn't recorded it earlier, I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it. This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have twice as many VBoxHeadless processes running as before. Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting of how often I call VBoxManage, just in case I accidentally bring 20 VMs to their knees.
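For what it's worth, the caching and rate-limiting mentioned above can be kept fairly small. Below is a rough sketch of a wrapper that reuses VBoxManage output per command for a TTL and spaces out real calls; the TTL, the minimum interval, the VM name and the guest-property path in the example are illustrative assumptions, not recommended values.

"""Sketch of a cached, throttled VBoxManage wrapper. TTL, interval and the
example VM name are illustrative only."""

import subprocess
import threading
import time

CACHE_TTL = 60.0      # seconds to reuse a previous answer
MIN_INTERVAL = 2.0    # minimum spacing between real VBoxManage calls

_cache = {}           # args tuple -> (timestamp, output)
_lock = threading.Lock()
_last_call = [0.0]


def vboxmanage(*args: str) -> str:
    key = tuple(args)
    now = time.monotonic()
    with _lock:
        hit = _cache.get(key)
        if hit and now - hit[0] < CACHE_TTL:
            return hit[1]
        # Throttle real calls so the management interface is never hammered.
        wait = MIN_INTERVAL - (now - _last_call[0])
        if wait > 0:
            time.sleep(wait)
        out = subprocess.run(["VBoxManage", *args],
                             capture_output=True, text=True, check=True).stdout
        _last_call[0] = time.monotonic()
        _cache[key] = (_last_call[0], out)
        return out


if __name__ == "__main__":
    # Example: cached lookup of a VM's IPv4 address via guest properties
    # ("my-vm" is a placeholder VM name).
    print(vboxmanage("guestproperty", "get", "my-vm",
                     "/VirtualBox/GuestInfo/Net/0/V4/IP"))

This does not fix the underlying instability, but it caps how often the management interface is touched, which in my experience of flaky management APIs is usually enough to keep 20 VMs from being knocked over by routine polling.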

    Read the article
