Search Results

Search found 2088 results on 84 pages for 'jobs'.

  • Site Web Analytics not updating in SharePoint 2010

    - by Rohit Gupta
    If you are facing the issue where the Web Analytics reports in SharePoint 2010 Central Administration are not updating (when you go to your site > Site Settings > Site Web Analytics reports, or Site Collection Analytics reports, you get old data, with the ribbon displaying "Data Last Updated: 12/13/2010 2:00:20 AM"), please ensure that the following things are covered:
      - The Usage and Data Health Data Collection service is configured correctly.
      - The Log Collection Schedule is configured correctly.
      - The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals.
      - One last important timer job: the Web Analytics Trigger Workflows timer job. Ensure it is enabled and scheduled to run at regular intervals (for each site that you need analytics for).
    After you have ensured that the Web Analytics service configuration is working fine, that the Usage Data Import job is importing the *.usage files from the ULS logs folder into the WSS_Logging database, and that all the required timer jobs are running as expected, wait for a day for the report to get updated. The report gets updated automatically at 2:00 AM, and I could not find a way to control the schedule for this report update job. So be sure to wait a day before giving up :)
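    One quick way to check those timer jobs is from the SharePoint 2010 Management Shell. The following is a minimal sketch; the name filter and the internal job name are assumptions to verify against your own farm, since job names vary by build and language.

      # Minimal sketch, assuming the SharePoint 2010 Management Shell.
      # List the usage/analytics timer jobs with their schedules and last run times.
      Get-SPTimerJob |
          Where-Object { $_.Name -like "*usage*" -or $_.Name -like "*analytics*" } |
          Select-Object Name, Schedule, LastRunTime

      # Kick off the usage import on demand. "job-usage-log-file-import" is the
      # usual internal name of the Usage Data Import job, but verify it in the
      # list above before running this.
      Start-SPTimerJob -Identity "job-usage-log-file-import"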

    Read the article

  • How do I prevent my filesystems from being mounted read-only after suspending?

    - by Chas. Owens
    I have Ubuntu 12.04 installed on an SDHC card (only one ext2 partition, no swap). When I suspend using pm-suspend, my root filesystem is mounted read-only. I am currently "fixing" this with the following file, /etc/pm/sleep.d/99_make_disk_rw:
      #!/bin/sh
      mount -o remount,rw /
    But the disk is marked as needing an fsck on reboot. How can I prevent the filesystem from being mounted read-only, or fix whatever is going wrong here? Update: It looks like it is getting mounted read-only because an error occurred. I have changed the mount options for / in /etc/fstab to noatime,nodiratime,errors=continue and it no longer mounts the SDHC card as read-only after it resumes. So the problem is happening when it suspends, not when it resumes as I had thought. I checked /sys/bus/usb/devices/1-4/power/persist and it is set to 1, so the SDHC card shouldn't appear disconnected to the OS (or, more accurately, it should recover from the disconnection without error). Here is the relevant section of the syslog:
      Sep 10 10:34:23 iubit kernel: [ 748.246226] sd 4:0:0:0: [sdb] Media Changed
      Sep 10 10:34:23 iubit kernel: [ 748.246234] sd 4:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
      Sep 10 10:34:23 iubit kernel: [ 748.246243] sd 4:0:0:0: [sdb] Sense Key : Unit Attention [current]
      Sep 10 10:34:23 iubit kernel: [ 748.246253] Info fld=0x0
      Sep 10 10:34:23 iubit kernel: [ 748.246258] sd 4:0:0:0: [sdb] Add. Sense: Not ready to ready change, medium may have changed
      Sep 10 10:34:23 iubit kernel: [ 748.246271] sd 4:0:0:0: [sdb] CDB: Read(10): 28 00 00 5d 3e f0 00 00 08 00
      Sep 10 10:34:23 iubit kernel: [ 748.246291] end_request: I/O error, dev sdb, sector 6110960
      Sep 10 10:34:23 iubit kernel: [ 748.247027] EXT2-fs (sdb1): error: ext2_fsync: detected IO error when writing metadata buffers
      Sep 10 10:34:23 iubit anacron[6954]: Anacron 2.3 started on 2012-09-10
      Sep 10 10:34:23 iubit anacron[6954]: Normal exit (0 jobs run)
      Sep 10 10:34:24 iubit laptop-mode: Laptop mode
      Sep 10 10:34:24 iubit laptop-mode: enabled, not active
      Sep 10 10:34:24 iubit kernel: [ 749.055376] sd 4:0:0:0: [sdb] No Caching mode page present
      Sep 10 10:34:24 iubit kernel: [ 749.055387] sd 4:0:0:0: [sdb] Assuming drive cache: write through
      Sep 10 10:34:25 iubit anacron[7555]: Anacron 2.3 started on 2012-09-10
      Sep 10 10:34:25 iubit anacron[7555]: Normal exit (0 jobs run)
      Sep 10 10:34:31 iubit kernel: [ 756.090861] EXT2-fs (sdb1): previous I/O error to superblock detected

    Read the article

  • About freelancing in third world countries

    - by MaKo
    Hello guys, one question that has been bugging me. First of all, I want to say that I actually come from a third world country, so I am all for opportunities for everybody. So here is my concern: I have been working as a programmer on iPhone apps (a noob in the company), now in my new "first world" country (immigration can be good!), but I seem to be getting more and more advertisements from sites like freelancer.com and the like. So I want to know what you think about all this: will jobs keep getting cheaper? If a project can be done for, say, 10% of the cost overseas, what is stopping employers from doing just that? Is it worth it? What about the quality of a local job versus an overseas job, and all the other aspects I cannot think of? I just want to know if all these years of learning are going to pay off, or if in the near future all programming jobs will just go to cheaper labor (sweatshops?). OK, I hope my ramblings make sense. Cheers ;)

    Read the article

  • Feature Updates to the Windows Azure Portal

    - by Clint Edmonson
    Lots of activity over at the Windows Azure portal this weekend, including some exciting new features and major improvements to existing features. Here are the highlights:
      - Support for Managing Co-administrators: Set up account co-administrators to allow others to share service management duties for each Azure subscription.
      - Import/Export support for SQL Databases: Export existing SQL Azure databases to blob storage using SQL Server 2012's BACPAC format. Create a new SQL Azure database from an existing BACPAC stored in blob storage.
      - Storage Container Management and Access Control: Create blob storage containers directly within the portal, edit their public/private access settings, and drill into storage containers to see the blobs contained within them.
      - Improved Cloud Service Status Notifications: Detailed health status information about cloud services and roles as they transition between states.
      - Virtual Machine Experience Enhancements: Option to automatically delete corresponding VHD files from blob storage when deleting VM disks.
      - Service Bus Management and Monitoring: Ability to create and manage service bus Namespaces, Queues, Topics, Relays and Subscriptions. Rich monitoring of Topics, Queues, and Subscriptions with detailed and customizable dashboard metrics. Entity status (Topic, Queue, or Subscription) can be changed interactively via the dashboard. Direct links to the Access Control Services (ACS) namespaces when working with service bus access keys.
      - Media Services Monitoring Support: Monitor encoding jobs that are queued for processing, as well as active, failed and queued tasks for encoding jobs.
    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted Reference ID: P7VVJCM38V8R

    Read the article

  • Why is CS never a topic of conversation of the layman? [closed]

    - by hydroparadise
    Granted, every profession has its technicalities. If you are an MD, you had better know the anatomy of the human body, and if you are an astronomer, you had better know your calculus. Yet you don't have to know these more advanced topics to know that smoking might give you lung cancer because of carcinogens, or that the moon revolves around the earth because of gravity (thank you, Discovery Channel). There is a sort of common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outside 3000- or 4000-level classes in a university setting, or between colleagues? Even back in my days before college, in my pursuit of knowledge of how computers work, these very important topics (IMHO) never seemed to get the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?" or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself. To extend this idea, Dennis Ritchie died shortly after Steve Jobs, yet Dennis Ritchie got very little press compared to Steve Jobs. So, the heart of my question: does the public in general not care to hear about the computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap? Am I wrong to think the general public has the same thirst for knowledge of how things work as I do? Please consider the question carefully before answering or voting to close.

    Read the article

  • Trying to move away from PHP/Yii: RoR, Python/Django or ASP.NET MVC? Your opinions please [closed]

    - by Örs
    I have a CS degree and I've been working as a web developer (front and back end) for about 2 years now. I've been working with PHP mostly because it was easy to pick up and find a job, but I've grown to dislike the language and want to try something new, and possibly get a better paying job. That last point is especially important because in my area (Romania/Eastern Europe) PHP jobs are mostly for people fresh out of college/high school, hence the pay is rather low. I've been working with the Yii framework which, if I understand correctly, borrows a lot from Ruby on Rails (convention over configuration, MVC, Active Record, scaffolding). Other than PHP I only know curly-brace languages (C/C++/Java) and bash, so Python/Ruby might be a bit challenging. On the other hand, I've been using Linux (with vim and recently Sublime Text 2) for almost 4 years now, so Windows and the lack of a terminal would have their downsides as well. I'm leaning towards Python/Ruby because of my *nix bias (plus both look like fun), but I've heard great things about ASP.NET MVC as well. Any suggestions? PS: I think there are more jobs in ASP.NET around here, but that's not necessarily a plus, because there are a lot of CS graduates as well. tl;dr: Romanian PHP/Yii developer trying to move to Python/Django, Ruby/Rails or C#/ASP.NET MVC. Suggestions?

    Read the article

  • Employee Engagement: Drive Business Value

    - by Kellsey Ruppel
    As we've been discussing this week, employee engagement is extremely important, and you've probably realized that effectively engaging your employees is essential to driving business value. Your employees are the ones responsible for executing on the business' objectives. Your employees (in the sales and service departments) are the ones interacting with your customers the most, so delivering on customer expectations and attaining high levels of customer engagement are simply not possible without successfully empowering this stakeholder group. High employee and partner engagement can have many benefits, including:
      - Higher levels of employee productivity
      - Longer employee retention
      - Stronger, more enduring and more successful relationships
      - Serving as ambassadors for an organization's brand
      - Being more likely to deliver excellent customer service
      - Referring others for hire
      - Recommending the organization's products and services
      - Sharing feedback with their colleagues
    In a way, engagement is a measure of employee investment in an organization's mission and brand. And then you have the enablement piece of this as well. It's hard to imagine a high level of engagement existing among employees who don't feel that they've been enabled to do their jobs efficiently or effectively. You're just not going to find high engagement among people if the everyday processes and technologies they work with make it a challenge for them to access, share and manage the information they need to do their jobs, or if they're unable to effectively collaborate on the projects they're working on. How does your organization measure on the employee engagement spectrum? We've got a number of different resources to help you get started:
      - Portal Resource Center
      - Video: Got a minute?
      - WebCenter in Action Webcast Series
      - Portal Engagement Webcast

    Read the article

  • Advice on designing web application with a 40+ year lifetime

    - by user2708395
    Scenario: Currently, I am a part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms created by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are:
      - Workflow management of forms
      - Schedule management of forms
      - Security/role-based management
      - Reporting engine
      - Mobile/tablet support
    Situation: Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated, and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task to re-architect this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on rebuilding this system. I suggested using a CMS like Drupal, but for policy reasons at the client's organization, the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting. Questions:
      - What design considerations will make the system more "future proof"?
      - What experiences have you had in designing such systems, both failures and successes?
      - What questions should be asked of the client/PM to make the system more "future proof"?

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training: Trusted Data for Your Enterprise Applications. Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions. Training is suitable for:
      · Database administrators
      · Architects
      · Technical staff
    Objectives of the training: after completing this course, participants should:
      · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
      · Be able to describe some of the key capabilities and benefits delivered by EDQ
      · Be able to create and run standalone EDQ processes and jobs
      · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers
    Agenda:
      17th June, Fundamentals For Demoing (Profile, Audit, Transform and More): Profiling; Auditing; Transforming; Writing and exporting data; Jobs and scheduling; Publishing, packaging and copying EDQ processes; Introduction to the Customer Data Extension Pack; Realtime Processing via Web Services; The Server Console; Run Profiles; Data Interfaces; Sampling; Publishing metrics to the Dashboard; Users and security
      18th June, Matching: Matching overview; Basic matching configuration; Matching rule hierarchies; Clustering; Merging; Reviewing possible matches; Outputting match data; Case study
      19th June, Address Verification and Parsing: Address Verification overview; Configuration; Accuracy flags; Parsing overview; Phrase profiling; Tailoring a CDEP parser; Base tokenization; Classification; Reclassification; Selection; Resolution
    Register here. Don't miss this FREE event; space is limited. Oracle University, V Parku 2294/4, 148 00 Praha 4. 17.6.-19.6.2014, 09:00-17:30.

    Read the article

  • What are some good tips for a developer trying to design a scalable MySQL database?

    - by CFL_Jeff
    As the question states, I am a developer, not a DBA. I have experience with designing good ER schemas and am fairly knowledgeable about normalization and good schema design. I have also worked with data warehouses that use dimensional modeling with fact tables and dim tables. However, all of the database-driven applications I've developed at previous jobs have been internal applications on the company's intranet, never receiving "real-world traffic". Furthermore, at previous jobs, I have always had a DBA or someone who knew much more than me about these things. At this new job I just started, I've been asked to develop a public-facing application with a MySQL backend and the data stored by this application is expected to grow very rapidly. Oh, and we don't have a DBA. Well, I guess I am the DBA. ;) As far as designing a database to be scalable, I don't even know where to start. Does anyone have any good tips or know of any good educational materials for a developer who has been sort of shoved into a DBA/database designer role and has been tasked with designing a scalable database to support an application like this? Have any other developers been through this sort of thing? What did you do to quickly become good at this role? I've found some good slides on the subject here but it's hard to glean details from slides. Wish I could've attended that guy's talk. I also found a good blog entry called 5 Ways to Boost MySQL Scalability which had some good information, though some of it was over my head. tl;dr I just want to make sure the database doesn't have to be completely redesigned when it scales up, and I'm looking for tips to get it right the first time. The answer I'm looking for is a "list of things every developer should know about making a scalable MySQL database so your application doesn't perform like crap when the data gets huge".

    Read the article

  • Do I expect too much work from an employer? [closed]

    - by Ant
    I recently switched jobs because I was not challenged enough, the work would come in waves, and I HATED the people I worked with. I am a recent college grad (May 2009), and based on the 3 internships I had and the 2 full-time jobs I have held, I am finding that employers cannot keep me satisfied with the amount of work. At my new job, I like the people I work with and I am challenged, but I still do not get enough work. I hate downtime. I always want to have something to work on for AT LEAST 6 out of the 8 hours. I was surprised that my new employer actually hired me, because I had minimal exposure to the majority of the technologies they implement; I never programmed in them outside of one class in college. My greatest strength is that I am an extremely fast learner, and I can pick up new technologies with relative ease. They gave me a project to work on by myself, and I think they assumed it would take me longer to complete. Now that I have finished that app, they are struggling to find something for me to do. I am not sure if it is bad timing being close to the holidays, my manager dealing with personal issues at home, how quickly I finished the first project, or whether I expect too much of an employer. If so, what are good things to do with all this downtime?! EDIT: Thanks for all the feedback! EDIT 2: I am going to "unaccept" the answer in an effort to keep the question open. As a few people have mentioned, this is a great discussion on how to grow as a new worker in the programming field. EDIT 3: I am attempting to revive this question so the moderators will see the support to re-open it.

    Read the article

  • The Real Value Of Certification

    - by Brandye Barrington
    I read a quote recently by Rich Hein of CIO.com: "Certifications are, like most things in life: the more you put into them, the more you will get out." This is what we tell candidates all the time. The real value in obtaining a certification is the time spent preparing for the exam. All the hours spent reading books, practicing in hands-on environments, asking questions and searching for answers are valuable: valuable preparation for the exam, but also valuable preparation for your future job role and for your career. If your goal is just to pass an exam, you've missed a very important part of the value of certification. We receive so many questions through different forms of social media about whether or not certification will help candidates get jobs or get better jobs. Surveys conducted by us and by independent entities all point to the job and salary benefits of certification. However, a key part of that equation is whether a candidate can actually perform successfully in a job role. If preparation time was used to practice, learn and master new skills rather than to memorize a brain dump, the candidate will probably perform successfully in their job role, and job opportunities and higher salary will likely follow. Candidates who do not show that initiative will not likely reap the full benefits of certification. Keep this in mind as you approach your next certification exam. You are preparing for a career, not an exam. This may help you to be more appreciative of the long hours spent studying!

    Read the article

  • I am confused between PHP and ASP.NET as a career choice in the Indian software development context

    - by Confused_Guy
    I need your help (especially from the software professionals of India). I completed my MCA in 2009. After that, instead of joining a software company, I took a teaching job near my home. In the meantime, I prepared myself for public sector (bank) jobs. I continued that job for one more year and left it in 2010. Now, in 2012, I feel that I should have taken a software job, so that I could earn my bread and butter and prepare for the public sector exams at the same time, because according to my qualification it would give me the best salary. Now I want to go back to the software industry, but everyone is asking for experience, and I don't have any. So which language should I learn? And what should I do, given that I have a two-year gap? Some of my friends suggested I go with PHP, as it's easier and quicker to get a PHP job in India, but here the PHP guys are getting less salary compared to ASP.NET. I am planning to begin with PHP, but is it possible to switch to ASP.NET after two years of experience? Where I stand now: JAVA: I know up to servlets and JSP, which is nothing in the current market. ASP.NET: I know the basics, up to database connection (i.e. GridView). PHP: Only the basics. So what should I do now? Which is most in demand? Is PHP good? I feel it's more like JSP pages. Please guide me; all your suggestions are needed.

    Read the article

  • Why are transaction monitors in decline? Or are they?

    - by mrkafk
    http://www.itjobswatch.co.uk/jobs/uk/cics.do
    http://www.itjobswatch.co.uk/jobs/uk/tuxedo.do
    Look at the demand for programmers (the % of job ads in which the keyword appears), in the first graph under the table. It seems demand for CICS and Tuxedo has fallen from 2.5% and 1% respectively to almost zero. To me, this seems bizarre: we now have more networked and internet-enabled machines than ever before, and most of them are talking to some kind of database. So it would seem that use of products whose developers spent the last 20-30 years distributing, coordinating and optimizing transactions should be on the rise. And it appears they're not. I can see a few possible causes but can't tell whether they are true:
      - We forgot that concurrency and distribution are really hard, and we are redoing it all by ourselves, in Java, badly.
      - Erlang killed them all.
      - Projects nowadays have changed character: most business software has already been built, and we're all doing internet services, using stuff like Node.js, Erlang, Haskell. (I've used RabbitMQ, which is written in Erlang, but it was a "small specialized side project" kind of thing.)
      - Big data is the emphasis now, and big data doesn't need transactions very much (?).
    None of those explanations seems particularly convincing to me, which is why I'm looking for a better one. Anyone?

    Read the article

  • How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: let's say you have a central computer which generates a lot of data. This data must go through some processing which, unfortunately, takes longer than the generation does. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction and that jobs dropped by a slave are retasked to another. The main question is: what approach should I use to achieve this? But perhaps the following would help me arrive at an answer:
      - Is there a name or design pattern for what I am trying to do?
      - What domain of knowledge do I need to get these computers to talk to each other? (E.g., will a database, which I have some knowledge of, be enough, or will this involve sockets, which I have yet to learn?)
      - Are there any examples of such a system?
    The main question is a bit general, so it would be good to have a starting point/reference point. Note: I am assuming the constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
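    What is being described is usually called the master/worker pattern with a job queue and lease-based retasking. Purely as an illustration of the master's bookkeeping (the question's C++/Windows transport layer, whether sockets or a shared database, is out of scope), here is a minimal sketch; the function names and the 30-second lease are assumptions:

      # Master-side bookkeeping: a queue of unassigned jobs plus a lease table.
      $pending   = New-Object System.Collections.Queue   # jobs nobody owns yet
      $leases    = @{}                                   # jobId -> @{ Slave; Deadline }
      $leaseSecs = 30                                    # assumed per-job timeout

      1..100 | ForEach-Object { $pending.Enqueue($_) }   # toy job ids

      function Assign-Job($slaveId) {
          # Hand the next job to a slave and start a lease clock for it.
          if ($pending.Count -eq 0) { return $null }
          $job = $pending.Dequeue()
          $leases[$job] = @{ Slave = $slaveId; Deadline = (Get-Date).AddSeconds($leaseSecs) }
          return $job
      }

      function Complete-Job($jobId) {
          # Slave reported success: the lease is discharged.
          $leases.Remove($jobId)
      }

      function Retask-Expired {
          # A lease past its deadline means the slave dropped out or stalled,
          # so the job goes back on the queue for another slave to pick up.
          foreach ($jobId in @($leases.Keys)) {
              if ($leases[$jobId].Deadline -lt (Get-Date)) {
                  $leases.Remove($jobId)
                  $pending.Enqueue($jobId)
              }
          }
      }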

    Read the article

  • Writing a better timesheet

    - by gunbuster363
    Recently, my company started to require us to fill out a monthly timesheet, writing down everything you do in the office. A timesheet contains 29-31 days, depending on the month. I need to write what I did in every row of the Excel file, each row representing a day. This timesheet embarrasses me, because something like this can happen: I spend Monday writing a program, and the program is done. Because my boss didn't give me another program to write, I am basically just sitting there pretending to be busy in the following days, before my boss gives me another assignment. Of course I should not write that in the timesheet as it is. I could write in the timesheet that I spent 4 days writing the program, but that makes me feel very inefficient. I could separate the process into 1) write the program, 2) deploy the program, 3) test the program, but that could stretch the process out to something like 3 weeks, really. Have you encountered such a situation? How would you deal with it? EDIT: Some people said I should be more proactive about asking for more assignments, but here is the situation: the boss of my boss gives some jobs to my boss, then my boss gives the jobs to me; sometimes I can also see my boss being quite less busy. One of my colleagues said that I should not ask for another assignment proactively, because it would be a headache for my boss to think up a job out of nowhere for me. I don't want things to turn out like that, really.

    Read the article

  • Database development/admin - What exactly should I be trying to learn? [closed]

    - by Sauron
    I've been a bit wary about approaching learning databases. I've dabbled in them before, and "data" in itself appeals to me a lot: maintaining, searching and moving it, everything about it in the abstract sense I love. (This isn't a career question; this is a learning question.) But as RDBMSs become easier to maintain, and with tools that "anyone" can use to manage data, I feared for the database admin jobs (yet I see jobs for SQL everywhere!). I know "NoSQL" was a big deal, but it kind of has its niche. But that leaves me here, unsure of really what to study. What tools should I have in my tool belt? I'm sure RDBMSs will be around, both in use and in maintaining legacy code. But what else? Obviously data will always be around, but is this a secure field or a dying one? And what should I be concentrating on?

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick phrase profile of the subject field of emails to the global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, PeopleSoft, CRM On Demand, Fusion, DRM, Endeca, RightNow, and more, and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack, a ready set of standard processes with standard interfaces, providing integrated:
      - Address verification and cleansing
      - Individual matching
      - Organization matching
    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total, across all locales, CDS includes well over a million dictionary entries.
      [Figure: Excerpt from EDQ's CDS Individual Name Standardization Dictionary]
    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack.
    Address Verification and Standardization
      [Figure: EDQ's CDS Address Cleaning Process]
    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real time or batch.
    The Address Verification processor is wrapped in an EDQ process; this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
      - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
      - Optimization of address verification by pre-standardizing data where required
      - Formatting of output addresses into the input address fields normally used by applications
      - Adding descriptions of the address verification and geocoding return codes
    The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.
    Duplicate Prevention
    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:
      [Figure: EDQ Duplicate Prevention Architecture]
    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).
    Batch Matching
    In order to allow customers to use different match rules in batch and real time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by data stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.
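    To make the call sequence concrete, here is a hedged sketch of the stateless duplicate-prevention flow described above. The endpoint URLs, payload shapes and the Get-CandidatesByKeys helper are all invented for illustration; the real web service contract is defined by the CDS documentation.

      # Illustrative only: endpoint names, port and payloads are assumptions.
      $base = "http://edq-server:9002/services"   # hypothetical EDQ host

      # 1. Ask the Cluster Key Generation service for keys for the driving record.
      $record = @{ GivenName = "John"; FamilyName = "Smyth"; Postcode = "SW1A 1AA" }
      $keys = Invoke-RestMethod -Uri "$base/clusterKeyGeneration" -Method Post `
                  -Body ($record | ConvertTo-Json) -ContentType "application/json"

      # 2. The application selects candidate records sharing any key value.
      #    (An application-side lookup against pre-seeded keys, not an EDQ call;
      #    Get-CandidatesByKeys is a hypothetical helper.)
      $candidates = Get-CandidatesByKeys $keys

      # 3. Present the driving record plus candidates to the Matching Service,
      #    which returns scored matches; the application decides what to commit.
      $payload = @{ driving = $record; candidates = $candidates } | ConvertTo-Json -Depth 5
      $matches = Invoke-RestMethod -Uri "$base/matchIndividuals" -Method Post `
                     -Body $payload -ContentType "application/json"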
    The Future
    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
      - An out-of-the-box Data Quality Dashboard
      - Even more comprehensive international data handling
      - Address search (suggesting multiple results)
      - Integrated address matching
    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Hello With Oracle Identity Manager Architecture

    - by mustafakaya
    Hi, my name is Mustafa! I'm a Senior Consultant on the Fusion Middleware team, living in Istanbul, Turkey. I have worked on various Java-based software development projects such as end-to-end web applications, CRM, telco VAS and integration projects. I want to share my experience of and research on Fusion Middleware products in this column. Customers always want the best solution from software consultants or developers. The solution may be a code snippet or a complete change of architecture; we face different requests depending on the customer's case. In my posts I want to discuss the architecture of Fusion Middleware products, how to extend their usability with APIs or UI customization, and more, and I look forward to engaging with you on your experiences and thoughts on this. In this first post I will discuss the Oracle Identity Manager architecture, and I plan to discuss Oracle Identity Manager 11g features in later posts.
    Oracle Identity Manager System Architecture
    Oracle Identity Governance includes Oracle Identity Manager, Oracle Identity Analytics and Oracle Privileged Account Manager. I will discuss the Oracle Identity Manager architecture in this post. Basically, Oracle Identity Manager is an n-tier, standard Java EE application that is deployed on Oracle WebLogic Server and uses a database. The Oracle Identity Manager presentation tier has three different screens and two different clients. Identity Self Service and Identity System Administration are web-based thin clients; the Design Console is a Java Swing client that communicates directly with the business service tier. Identity Self Service provides end-user operations and delegated administration features, System Administration provides system administration functions, and the Design Console is mostly used for development and management operations, such as creating and managing adapters and process forms, notifications, workflow design, reconciliation rules, etc. The business service tier is implemented as an Enterprise JavaBeans (EJB) application, so you can extend Oracle Identity Manager's capabilities:
      - The SPML and EJB APIs allow you to develop custom plug-ins, such as for managing roles or identities.
      - Identity services expose the core business capabilities of Oracle Identity Manager, such as the user provisioning or reconciliation service.
      - Integration services allow you to develop custom connectors or adapters for various deployment needs.
      - Platform services let you use entitlement servers, the scheduler or SOA composites.
    The middleware tier gives you the capabilities of ADF Faces, SOA Suite, the scheduler, Oracle Entitlement Server and BI Publisher reports. So OIM allows you to configure workflows using Oracle SOA Suite or to define authorization policies with Oracle Entitlement Server. You can also customize the OIM UI without needing to write code, and using the ADF Business Editor you can add custom attributes to the user, role, catalog and other objects. Data tier: Oracle Identity Manager is driven by data and metadata, which provides flexibility and adaptability to Oracle Identity Manager functionality.
      - Database: there are five schemas, OIM, SOA, MDS, OPSS and OES. Oracle Identity Manager uses the database to store runtime and configuration data, and all entity, transactional and audit data is stored there.
      - Metadata Store: customizations and personalizations are stored in a file-based or database-based repository. In the Oracle Identity Manager architecture, the metadata is kept in the Oracle Identity Manager database to take advantage of some of the advanced performance and availability features that this mode provides.
      - Identity Store: Oracle Identity Manager provides the ability to integrate an LDAP-based identity store into the Oracle Identity Manager architecture.
    Oracle Identity Manager uses the human workflow module of Oracle SOA Suite; OIM connects to SOA using the T3 URL, which is the front-end URL for the SOA server. Oracle Identity Manager uses the embedded Oracle Entitlement Server for authorization checks in the OIM engine. Several Oracle Identity Manager modules use JMS queues; each queue is processed by a separate message-driven bean (MDB), which is also part of the Oracle Identity Manager application, as are the message producers. Oracle Identity Manager uses scheduled jobs for some background activities. Some scheduled jobs come out of the box, such as disabling users after their end date, or you can define custom scheduled jobs with the Oracle Identity Manager APIs. You can use Oracle BI Publisher to report on Oracle Identity Manager transaction or audit data in the database.
    About me: Mustafa Kaya is a Senior Consultant on the Oracle Fusion Middleware team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and a PowerShell expert. At my request, he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming, with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words. I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest thing at TechEd NA this year was PowerShell. Fellows, if you still don't know about it, you had better get moving: the Core Server editions are the future for SQL Server, and consequently so is the shell. You don't want to be left out of this, right? Let's see some PowerShell magic now. To start our tour, first we need to download two functions from PowerShell and SQL Server master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile; in my case, the module is called functions.psm1. To have some data to play with, I created 10 CSV files with the same content. I just put the SQL Server error log into a CSV file and created 10 copies of it:
      # Just create a CSV with data to import, using the SQL Server error log.
      [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
      $ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:ComputerName
      $ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
      for ($Count = 1; $Count -le 10; $Count++) {
          Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
      }
    Now in my path c:\SQLAuthority, I have 10 CSV files. Now it is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository and the table is CSV_SQLAuthority:
      CREATE TABLE [dbo].[CSV_SQLAuthority](
          [LogDate] [datetime] NULL,
          [Processinfo] [varchar](20) NULL,
          [Text] [varchar](MAX) NULL
      )
    Let's play a little bit. I want to import all the CSV files from the path into the table synchronously:
      # Importing synchronously
      $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
      $DataTable = Out-DataTable -InputObject $DataImport
      Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable
    Very cool, right? Let's do it asynchronously and in the background using PowerShell jobs:
      # If you want to do it all asynchronously
      Start-Job -Name 'ImportingAsynchronously' `
                -InitializationScript { Ipmo functions -Force -DisableNameChecking } `
                -ScriptBlock {
                    $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
                    $DataTable = Out-DataTable -InputObject $DataImport
                    Write-DataTable -ServerInstance "R2D2" `
                                    -Database "SQLServerRepository" `
                                    -TableName "CSV_SQLAuthority" `
                                    -Data $DataTable
                }
    Oh, but what if I have CSV files that are large and I want to import each one asynchronously? In that case, this is what should be done:
      Get-ChildItem "c:\SQLAuthority\*.csv" | % {
          Start-Job -Name "$($_)" `
                    -InitializationScript { Ipmo functions -Force -DisableNameChecking } `
                    -ScriptBlock {
                        $DataImport = Import-Csv -Path $args[0]
                        $DataTable = Out-DataTable -InputObject $DataImport
                        Write-DataTable -ServerInstance "R2D2" `
                                        -Database "SQLServerRepository" `
                                        -TableName "CSV_SQLAuthority" `
                                        -Data $DataTable
                    } -ArgumentList $_.FullName
      }
    How cool is that? Let's do the fun stuff now and schedule it in a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step; otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path; in my case, it is c:\temp\automation. Just add these lines at the end:
      Get-ChildItem "c:\SQLAuthority\*.csv" | % {
          Start-Job -Name "$($_)" `
                    -InitializationScript { Ipmo functions -Force -DisableNameChecking } `
                    -ScriptBlock {
                        $DataImport = Import-Csv -Path $args[0]
                        $DataTable = Out-DataTable -InputObject $DataImport
                        Write-DataTable -ServerInstance "R2D2" `
                                        -Database "SQLServerRepository" `
                                        -TableName "CSV_SQLAuthority" `
                                        -Data $DataTable
                    } -ArgumentList $_.FullName
      }
      Get-Job | Wait-Job | Out-Null
      Remove-Job -State Completed
    Why? See my post "Dooh PowerShell Trick – Running Scripts That Have Posh Jobs on a SQL Agent Job". Remember, this trick is for ALL scripts that use PowerShell jobs with any kind of scheduling tool (SQL Server Agent, Windows Scheduler). Create a job called ImportCSV and a step called Step_ImportCSV, and choose CmdExec. Then you just need to schedule it or run it. I did a short video (with matching good background music) and you can see it at: That's it, guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's LiveMeeting on July 18. You can find more information in: LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server With PowerShell – English. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology, Video Tagged: Powershell

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein
    5 Things I Learned About the IT Labor Shortage
    A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It's time to wrap up the series that I started back in April, "The Growing IT Labor Shortage: Are You Feeling It?" Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group's 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are:
      1. Lack of technical competencies (hard skills)
      2. Lack of available applicants
      3. Lack of experience
    The same report listed technicians as the most difficult position to fill in the United States. For most companies, human capital and talent management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see the Oracle whitepaper "Simplify Workforce Management and Increase Global Agility"). Everyone I spoke to, partner, customer, and Oracle experts alike, agreed that it can be extremely challenging to hire and retain IT talent in today's labor market. And they generally agreed on the causes:
      a. IT is so pervasive that there are myriad moving parts requiring support and expertise,
      b. thus, it's hard for university graduates to step in and contribute immediately without experience and specialization,
      c. big IT companies generally aren't the talent incubators that they were in the freewheeling 90's, due to bottom-line pressures that require hiring talent that can hit the ground running, and
      d. it's often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed.
    Here are my top lessons learned from my conversations with the experts.
    1. A Better Title Would Have Been "The Challenges of Finding and Retaining IT Talent That Matches Your Requirements"
    There are more applicants than jobs, but it's getting tougher and tougher to find individuals that perfectly fit each and every role. Top-performing companies are increasingly looking to hire the "almost ready", striving to keep their existing talent more engaged, and leveraging their employees' social and professional networks to quickly narrow down candidate searches (here's another whitepaper, "A Strategic Approach to Talent Management").
    2. Size Matters, But So Does Location
    Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren't within commuting distance of IT talent strongholds. They can't always match the compensation and benefits offered by large enterprises, so it's paramount to offer candidates a high quality of life and opportunities to build their resumes in alignment with their long-term career aspirations.
    3. Get By With a Little Help From Your Friends
    It doesn't always make sense to invest time and money in training an employee on a task they will not perform frequently, or to get into a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can be assigned to roles that will have the highest positive impact on achieving organizational goals.
    4. It's Actually Both "What You Know" AND "Who You Know"
    If I were hiring someone today, I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time consuming AND produces better results. There is also some evidence that suggests new hires from employees' networks have higher job performance and retention rates.
    5. I Have New Respect for Recruiters and Hiring Managers
    My hat's off to them; it's not easy hiring and retaining top talent with today's challenges. Check out the infographic "A New Day: Taking HR from Chaos to Control" on Oracle's Human Capital Management solutions home page. You can also explore all of Oracle's HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned… we'll continue to post thought leadership on HCM and talent management topics.

    Read the article

  • How to configure Hudson and git plugin with an SSH key

    - by jlpp
    I've got Hudson (a continuous integration system) with the git plugin running on a Tomcat Windows service. msysgit is installed and the msysgit bin dir is in the path. PuTTY/Pageant/plink are installed, and msysgit is configured to use them. When I run a job that attempts to clone the git repository, I get the following error:
      $ git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace"
      ERROR: Error cloning remote repo 'origin' : Could not clone git@hostname:project.git
      ERROR: Cause: Error performing git clone -o origin git@hostname:project.git e:\HUDSON_HOME\jobs\Project Trunk\workspace
      Trying next repository
      ERROR: Could not clone from a repository
      FATAL: Could not clone
      hudson.plugins.git.GitException: Could not clone
    Running git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace" from the command line works without error. I've confirmed that my issue is not the same as http://stackoverflow.com/questions/1177292/hudson-git-clone-error, because git is in the path and I don't get any error about the git executable on Hudson's Configure System page. This leads me to believe that the problem is that the user who owns the Tomcat/Hudson Windows service (Local System) has no SSH key set up to be able to clone the git repository. My question is: how can I set things up so that the git plugin/msysgit knows to use a particular SSH key when trying to clone? I don't think Pageant will work, because the Tomcat service is running as the "Local System" user, but I may be wrong. I have tried setting Pageant up as a service (using runassvc.exe), passing the appropriate key, and having it run as "Local System"; the Tomcat/Hudson service doesn't seem to be able to see the key from the Pageant service. Are there any other techniques for setting up a key? Thanks. EDIT: The discussion on http://n4.nabble.com/Hudson-with-git-and-ssh-td375633.html shows that someone else had a similar question. ssh-agent was suggested, and this tool does come with msysgit, but I'm not sure how to use it in conjunction with the Hudson service. Still, a good clue if anyone can fill in the gaps. Thanks to Peter for the comment with the link. Also, the discussion on http://n4.nabble.com/questions-about-git-and-github-plug-ins-td383420.html starts off with the same question; I'm trying to resurrect that thread.
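    One approach that does not depend on Pageant is to point git at plink with the key baked into a wrapper script, published through a machine-level GIT_SSH variable so that services running as Local System inherit it. The following is a hedged sketch: all paths and the service name are assumptions for your environment, and the repository host's key must already be cached for the service account (plink prompts otherwise).

      # Wrapper that always passes the Hudson key to plink (paths are assumptions).
      $wrapper = "C:\tools\plink-with-key.bat"
      Set-Content -Path $wrapper -Value '@"C:\Program Files\PuTTY\plink.exe" -i C:\keys\hudson.ppk %*'

      # Machine-level variable so services, not just the current user, see it.
      [Environment]::SetEnvironmentVariable("GIT_SSH", $wrapper, "Machine")

      # Restart the Tomcat/Hudson service so it picks up the new environment.
      Restart-Service -Name "Tomcat6"   # service name is an assumption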

    Read the article

  • C#: How can I tell if an IEnumerable is mutable?

    - by Logan
    I want a method to update certain entries of an IEnumerable. I found that doing a foreach over the entries and updating the values failed, because in the background I was cloning the collection: my IEnumerable was backed by some LINQ-to-SQL queries. By changing the method to take a List I changed this behavior; Lists are always mutable, and hence the method changes the actual objects in the list. Rather than demanding that a List is passed, is there a "mutable" interface I can use?
      // Will not behave consistently for all IEnumerables: a deferred
      // LINQ-to-SQL query can yield fresh copies on each enumeration,
      // so the mutations below may never reach the original objects.
      public void UpdateYesterday(IEnumerable<Job> jobs)
      {
          foreach (var job in jobs)
          {
              if (job.date == Yesterday)
              {
                  job.done = true;
              }
          }
      }

    Read the article

  • Getting into a technology which requires experience when you have no experience

    - by dotnetdev
    It seems that SharePoint is a technology which is very hard to get into. All the jobs in this tech require experience working with it (e.g. 2 years' development experience in MOSS). If I wanted to get into this, but had no job that used the tech, how could I get the experience needed to land an experienced job? Job ads state you need "2 years professional experience in MOSS 2007", but if you have never done it, you won't get the job. The only possible way is to do this at home, and not in a team, though if you keep working in the meantime that offsets the teamwork gap (it's not like teamworking is tech-specific). Many people think that if you decide to build a project at home you're just going to play about aimlessly rather than work to specs (whereas in my current situation it's vice versa), but if you're dedicated, like me, you would write them, just not with the same presentation. Would employers treat experience at home as professional experience? BizTalk is another prime example of this. Thanks

    Read the article
