Search Results


  • What are the right reverse PTR, domain keys, and SPF settings for two domains running the same application?

    - by James A. Rosen
    I just read Jeff Atwood's recent post on DNS configuration for email and decided to give it a go on my application. I have a web-app that runs on one server under two different IPs and domain names, on both HTTP and HTTPS for each:

        <VirtualHost *:80>
          ServerName foo.org
          ServerAlias www.foo.org
          ...
        </VirtualHost>
        <VirtualHost 1.2.3.4:443>
          ServerName foo.org
          ServerAlias www.foo.org
        </VirtualHost>
        <VirtualHost *:80>
          ServerName bar.org
          ServerAlias www.bar.org
          ...
        </VirtualHost>
        <VirtualHost 2.3.4.5:443>
          ServerName bar.org
          ServerAlias www.bar.org
        </VirtualHost>

    I'm using GMail as my SMTP server. Do I need the reverse PTR and SenderID records? If so, do I put the same ones on all of my records (foo.org, www.foo.org, bar.org, www.bar.org, ASPMX.L.GOOGLE.COM, ASPMX2.GOOGLEMAIL.COM, ...)? I'm pretty sure I want the domain-keys records, but I'm not sure which domains to attach them to. The Google mail servers? foo.org and bar.org? Everything?
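
    For reference, a minimal sketch of what the SPF side could look like in each zone, assuming all mail is sent through Google's SMTP servers (the include target below is Google's published SPF domain; adjust it if you also send from your own IPs). SPF/SenderID applies to the sending domain, so only foo.org and bar.org need it, not the www aliases or Google's MX hosts:

        foo.org.  IN  TXT  "v=spf1 include:_spf.google.com ~all"
        bar.org.  IN  TXT  "v=spf1 include:_spf.google.com ~all"

    Reverse PTR records are different: they live in the reverse zone for the IP block, which is usually controlled by the hosting provider rather than by these zone files.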

    Read the article

  • Is my DNS server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different IPs, but all requesting similar DNS records:

        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
        May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied

    I've read of Dan Kaminsky's DNS cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for com and some for org, most looking for an MX record. There are requests from multiple IPs, but each IP will request hundreds of times per day. So this smells to me like an attack. What is the defense against this?
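
    For what it's worth, denials like the ones logged above are what BIND emits when its access controls already refuse the query. A minimal named.conf sketch for an authoritative-only server that refuses recursion and cache queries outright (standard BIND 9 options, shown here as an assumption about the intended setup):

        options {
            recursion no;                 // refuse recursive lookups entirely
            allow-query-cache { none; };  // deny cache queries like those in the log
        };

    With that in place, the remaining defense against the query flood is at the network level (firewall rate limiting), since the DNS server itself is already answering with refusals.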

    Read the article

  • Need help tuning MySQL and Linux server

    - by Newtonx
    We have a multi-user application (like MailChimp or Constant Contact). Each of our customers has its own contact list (from 5 to 100,000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have accumulated five years of data history:

    - users/customers (200+)
    - contacts (40 million records)
    - campaigns
    - campaign_deliveries (73,843,764 records)
    - campaign_queue (8 million currently)

    As we get more users and the table records increase, our system/web app is getting slower and slower. Some queries take too long to execute.

    SCHEMA

    Table contacts:

        +---------------------+------------------+------+-----+---------+----------------+
        | Field               | Type             | Null | Key | Default | Extra          |
        +---------------------+------------------+------+-----+---------+----------------+
        | contact_id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | client_id           | int(10) unsigned | YES  |     | NULL    |                |
        | name                | varchar(60)      | YES  |     | NULL    |                |
        | mail                | varchar(60)      | YES  | MUL | NULL    |                |
        | verified            | int(1)           | YES  |     | 0       |                |
        | owner               | int(10) unsigned | NO   | MUL | 0       |                |
        | date_created        | date             | YES  | MUL | NULL    |                |
        | geolocation         | varchar(100)     | YES  |     | NULL    |                |
        | ip                  | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------------+------------------+------+-----+---------+----------------+

    Table campaign_deliveries:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | id            | int(11)          | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | contact_id    | int(10) unsigned | NO   | MUL | 0       |                |
        | sent_date     | date             | YES  | MUL | NULL    |                |
        | sent_time     | time             | YES  | MUL | NULL    |                |
        | smtp_server   | varchar(20)      | YES  |     | NULL    |                |
        | owner         | int(5)           | YES  | MUL | NULL    |                |
        | ip            | varchar(20)      | YES  | MUL | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Table campaign_queue:

        +---------------+------------------+------+-----+---------+----------------+
        | Field         | Type             | Null | Key | Default | Extra          |
        +---------------+------------------+------+-----+---------+----------------+
        | queue_id      | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
        | newsletter_id | int(10) unsigned | NO   | MUL | 0       |                |
        | owner         | int(10) unsigned | NO   | MUL | 0       |                |
        | date_to_send  | date             | YES  |     | NULL    |                |
        | contact_id    | int(11)          | NO   | MUL | NULL    |                |
        | date_created  | date             | YES  |     | NULL    |                |
        +---------------+------------------+------+-----+---------+----------------+

    Slow query LOG:

        Query_time: 350  Lock_time: 1  Rows_sent: 1  Rows_examined: 971004
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1);

        Query_time: 235  Lock_time: 1  Rows_sent: 1  Rows_examined: 4455209
        SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2);

    How can we optimize it? Queries should take no more than 30 seconds to execute. Can we optimize it and keep all the data in one BIG database, or should we change the app's structure and give each user a separate database? Thanks
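
    As a first, low-risk experiment (a sketch, not a guaranteed fix): both slow queries filter on owner, one of them also on verified, so a composite index lets MySQL resolve the counts from the index instead of examining millions of rows:

        -- composite index covering both predicates of the slow COUNT queries
        ALTER TABLE contacts ADD INDEX idx_owner_verified (owner, verified);

        -- verify the plan actually uses it
        EXPLAIN SELECT COUNT(*) AS total
        FROM contacts
        WHERE contacts.owner = 70 AND contacts.verified = 1;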

    Read the article

  • Setting "Register this connection's addresses in DNS" using GPO

    - by ChamaraG
    Hi All, I need to get the Windows XP client machines in my network to dynamically update their DNS A records. The network is an AD domain running on Windows Server 2003 R2 servers with Win XP SP3 clients. Some machines already have the "Register this connection's addresses in DNS" check box checked and successfully update the DNS server. But some machines do not have this check box set and I need to set this. I read that this is possible using a GPO, and I enabled the following under Computer Configuration - Administrative Templates - Network - DNS Client:

    - Primary DNS Suffix
    - Dynamic Update
    - DNS Servers
    - Connection-Specific DNS Suffix
    - Register DNS records with connection-specific DNS suffix

    and, where required, entered the relevant parameters. Running rsop.msc on the client machines shows that the GPO has been applied. The client machines have been rebooted. The DNS server allows "Nonsecure and secure" dynamic updates and is only accessible from our internal network. But the "Register this connection's addresses in DNS" check box is not set, and the hosts without this set are not updating their DNS A records. Per another suggestion on a web site, I tried running "ipconfig /registerdns", but it does not add the DNS A record. Any advice on what I am doing wrong here would be gratefully accepted :-) Thank you.
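
    If the GPO continues to refuse to stick, a blunt fallback is to set the option directly from a startup script. A minimal sketch, assuming the default XP connection name and DHCP-assigned DNS servers (adjust the name= value to match your adapters):

        rem register=primary corresponds to ticking "Register this connection's
        rem addresses in DNS" (register=both also ticks the suffix box)
        netsh interface ip set dns name="Local Area Connection" source=dhcp register=primary
        ipconfig /registerdns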

    Read the article

  • Read Only Domain Controllers and DNS zone updates

    - by Mike M
    I have a Windows 2003 domain and just added a new DC that runs 2008 R2. I updated the schema accordingly for both forest and domain levels. I also made sure to run /rodcprep at the time I did this. I have a branch office with a 2008 R2 file/print server that is a read-only domain controller (RODC). The one problem I have been having is with AD-integrated DNS record updates. In the data center, we had to make an IP address change on a particular server. All our other sites' DCs (2003) updated the record fine. The 2008 R2 DC in the data center also updates its record fine. However, the RODC in the branch office does not. So if I nslookup the target server on a 2003 DC, the IP address is correct. Same with the 2008 R2 DC in the data center. But an nslookup on the branch office RODC still pulls in the old IP address. Moreover, any new records we've created (e.g., we just added a new terminal server) do not appear on the branch RODC either. Is there something simple I'm missing? How do I get the RODC to sync its AD-integrated DNS records with the rest of my world? Thank you in advance for your responses. Mike
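
    Not a guaranteed fix, but a useful test is to force the RODC to pull replication and then reload the zone from the directory. A minimal sketch (server and zone names are placeholders for your branch RODC and AD-integrated zone):

        rem pull inbound replication for all partitions, across sites
        repadmin /syncall branch-rodc /A /e
        rem re-read the AD-integrated zone content from the directory
        dnscmd branch-rodc /zoneupdatefromds corp.example.com

    If the records appear after this, the problem is replication latency or scope (e.g., the DNS application partition not replicating to the RODC) rather than the DNS service itself.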

    Read the article

  • DNS configuration error in Plesk

    - by Karthik Malla
    I purchased the domain www.softmail.me at Godaddy.com, tried its DNS settings and got lots of errors, so I finally changed my nameservers to my server's DNS, i.e. NS101.VPSLAND.COM and NS202.VPSLAND.COM, and created the domain in my Plesk panel (marked DNS & Mail as required). After adding my domain to the Plesk panel of my server, I opened the DNS records of that domain and found that DNS records were automatically generated to my needs as follows:

        65.75.241.26 / 24          PTR      softmail.me.
        ftp.softmail.me.           CNAME    softmail.me.
        lists.softmail.me.         CNAME    softmail.me.
        mail.softmail.me.          A        65.75.241.26
        mssql.softmail.me.         A        65.75.241.26
        ns.softmail.me.            A        65.75.241.26
        sitebuilder.softmail.me.   A        65.75.241.26
        softmail.me.               NS       ns.softmail.me.
        softmail.me.               A        65.75.241.26
        softmail.me.               MX (10)  mail.softmail.me.
        webmail.softmail.me.       A        65.75.241.26
        www.softmail.me.           CNAME    softmail.me.

    I then waited for a week, but I am still unable to use my domain. Also, in a DNS lookup I cannot find any records for my server except the name servers of VPSLAND. Do I need to add VPSLAND's nameservers anywhere in the Plesk panel? If so, where? Can anyone assist me with where the mistake is?
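
    Before changing anything in Plesk, it is worth checking where the lookup chain actually breaks. A quick sketch using dig (any Linux/Mac shell):

        # what the .me registry delegates for the domain, step by step
        dig +trace NS softmail.me
        # what one of the VPSLAND servers itself answers for the zone
        dig @ns101.vpsland.com softmail.me ANY

    If the second query returns the records listed above but the trace dies at the delegation step, the problem is the delegation/glue at the registrar, not the records in Plesk.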

    Read the article

  • DNS Round-robin, Load Balancing, Load sharing, and failover in 2012

    - by user1089770
    I have been reading many posts on Server Fault as well as on other sites regarding all of these. What I understand is that multiple A records (round-robin DNS) can be used for both:

    1. Load sharing (round-robin, but NOT load balancing). Many people say "load balancing", but I think there will be no load balancing, because "balance" literally means "compare two (or more) and adjust" (and that is what real software or hardware load balancers do), whereas browsers never do this; instead they randomly select an IP and connect to it. The browser has no knowledge of the current load of that server (possibly the IP it picked had the highest load!).

    2. Automatic failover (latest browsers only). Yes, I think DNS can be used as a simple failover system (at least in 2012; I don't know when it actually came into effect). Please refer to: http://webmasters.stackexchange.com/questions/10927/using-multiple-a-records-for-my-domain-do-web-browsers-ever-try-more-than-one and "Browser-based DNS failover using multiple A records" and http://www.nber.org/sys-admin/dns-failover.html

    I would like to make sure my assumptions/findings are right, so please let me know.
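
    For concreteness, round-robin here just means multiple A records on the same name; a minimal zone-file sketch with example addresses:

        ; resolvers typically rotate the order of these answers;
        ; each client then picks one (usually the first)
        www.example.com.  300  IN  A  192.0.2.10
        www.example.com.  300  IN  A  192.0.2.20
        www.example.com.  300  IN  A  192.0.2.30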

    Read the article

  • Windows Server 2008 - inexplicable system time jumps/glitches/inaccuracies

    - by Nathan Ridley
    I'm running a production web server on Windows Server 2008. On this server I have a database which logs certain user actions, but every now and again I inexplicably get database entries which, according to the record ID and the records immediately before and after, have the wrong time logged against them (7+ days too old). For example, record ID 1001 will be for Dec 7, 11pm, 1002 will be for Dec 7, 11:01pm, then 1003 will be for Nov 28, 1:38am, then the next will be back on track again. The problem occurs in random records (or 2-3 records in a row) and crops up once every few days. This is absolutely baffling, because there is only one place in the application that assigns this date/time value, and it's simply the system UTC date. I have been synchronizing the system time to time-a.nist.gov (which I read in another article was a bit more reliable than the default time.windows.com) and it seems to occasionally drift out of sync anyway (3-4 minutes), but I'm speculating that occasionally the time server has a temporary glitch where the date changes to a drastically wrong value for a short space of time, then changes back. Either that, or the motherboard clock battery is screwed and the reason the time momentarily changes is that the motherboard loses the time and then the time synchronization puts it back again. Could either of my suspicions be right? Should I turn off time synchronization for a production server? Assigning dates to an event log where the dates are up to 2 weeks prior to the actual date is a severe problem I can't have when the next version of my application is released. Any suggestions or advice would be appreciated.
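
    To test the clock-drift theory, you can watch the time service directly rather than inferring from the database. A minimal sketch with the built-in w32tm tool (the peer list shown is an example):

        rem how far off is the clock, and which source is it using?
        w32tm /query /status
        rem sync from an explicit peer list and force an immediate resync
        w32tm /config /manualpeerlist:"time-a.nist.gov time-b.nist.gov" /syncfromflags:manual /update
        w32tm /resync

    If the status output ever shows a large offset or a "Local CMOS Clock" source, that would point at the battery/motherboard theory rather than the NTP server.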

    Read the article

  • Accidentally dd'ed an image to wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - External drive is only used for data storage; the system is entirely intact
    - Drive had a single NTFS partition filling the entire device (2TB WD Elements)
    - Drive originally had an MBR partition table. Drive now shows as having a GPT, presumably from the FreeNAS image.
    - Drive was mounted at the time, with maybe a couple kB of data written/read after running dd
    - Drive is just a few months old and healthy (regular SMART / fs checks)
    - I have not rebooted the OS (CrunchBang)
    - /proc/partitions still holds the correct information (and has been stored)
    - I have dd's output (records in / out / bytes)
    - TestDisk did not find any partitions on a quick or deep search
    - I'm running PhotoRec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (80%) is unnecessary media files.

    My current plan is to let PhotoRec do its thing, then recreate the MBR with GParted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, a chance of success)? Or anything else I should try first?

    Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor  #blocks     name
        ** Snipped **
        8     32     1953512448  sdc
        8     33     1953511424  sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
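
    One possible way to rebuild the table from the saved geometry (a sketch only; double-check every number before writing anything): clear the leftover GPT signatures first (e.g., with gdisk's zap option), then recreate the single NTFS partition at its old offset:

        # start sector and size taken from /sys/block/sdc/sdc1/{start,size};
        # partition type 7 = HPFS/NTFS; -uS = units are sectors (classic sfdisk syntax)
        sfdisk -uS /dev/sdc <<EOF
        2048,3907022848,7
        EOF

    Since only the first ~100 MB of the disk was overwritten, the NTFS backup boot sector at the end of the volume may still be intact, which is also exactly the situation TestDisk's boot-sector repair mode targets.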

    Read the article

  • error while resolving DNS requests

    - by user2803887
    I followed this document to configure master-slave PowerDNS servers: http://linuxmanage.com/master-slave-powerdns-managed-by-poweradmin.html The installation completed perfectly with no errors, and I even feel DNS is trying to resolve some queries and parameters. But when going through intodns.com I get the errors below for domain names which I have created in the PowerDNS name server installed per the above guide.

        Error: Mismatched NS records
        WARNING: One or more of your nameservers did not return any of your NS records.

        Error: Multiple Nameservers
        ERROR: Looks like you have less than 2 nameservers. According to RFC2182 section 5 you must have at least 3 nameservers, and no more than 7. Having 2 nameservers is also ok by me.

        Error: Missing nameservers
        You should already know that the NS records reported by your nameservers are missing, so here it is again: nameservers ns1.makeittiny.com. ns2.makeittiny.com.

    I am quite new to PowerDNS, so I am not able to figure out where the problem is. I have checked everything but cannot make out where the problem lies.
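
    Since that guide stores zones in MySQL, one sanity check is whether the NS records actually exist in the backend. A sketch against the stock gmysql schema (table and column names from the standard PowerDNS schema):

        SELECT r.name, r.type, r.content
        FROM records r
        JOIN domains d ON d.id = r.domain_id
        WHERE d.name = 'makeittiny.com'
          AND r.type = 'NS';

    If that returns fewer than two rows (one for ns1 and one for ns2), intodns.com is simply reporting the truth and the records need to be added in PowerAdmin.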

    Read the article

  • django: error while migrating from 1.1 to 1.2

    - by gruszczy
    I am trying to migrate our django project from 1.1 to 1.2, but I get the following error:

        Traceback (most recent call last):
          File "./manage.py", line 11, in <module>
            execute_manager(settings)
          File "/usr/local/lib/python2.6/dist-packages/django/core/management/__init__.py", line 438, in execute_manager
            utility.execute()
          File "/usr/local/lib/python2.6/dist-packages/django/core/management/__init__.py", line 379, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/usr/local/lib/python2.6/dist-packages/django/core/management/base.py", line 191, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/usr/local/lib/python2.6/dist-packages/django/core/management/base.py", line 209, in execute
            translation.activate('en-us')
          File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/__init__.py", line 66, in activate
            return real_activate(language)
          File "/usr/local/lib/python2.6/dist-packages/django/utils/functional.py", line 55, in _curried
            return _curried_func(*(args+moreargs), **dict(kwargs, **morekwargs))
          File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/__init__.py", line 36, in delayed_loader
            return getattr(trans, real_name)(*args, **kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 193, in activate
            _active[currentThread()] = translation(language)
          File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 176, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/usr/local/lib/python2.6/dist-packages/django/utils/translation/trans_real.py", line 159, in _fetch
            app = import_module(appname)
          File "/usr/local/lib/python2.6/dist-packages/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/gruszczy/Programy/bozorth/../bozorth/notifications/__init__.py", line 2, in <module>
            from django.db.models.signals import post_save
          File "/usr/local/lib/python2.6/dist-packages/django/db/__init__.py", line 75, in <module>
            connection = connections[DEFAULT_DB_ALIAS]
          File "/usr/local/lib/python2.6/dist-packages/django/db/utils.py", line 92, in __getitem__
            conn = backend.DatabaseWrapper(db, alias)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/sqlite3/base.py", line 154, in __init__
            super(DatabaseWrapper, self).__init__(*args, **kwargs)
        TypeError: __init__() takes exactly 2 arguments (3 given)

    Has anyone encountered this problem and knows how to solve it?
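
    Not certain this is your case, but a common culprit in a 1.1-to-1.2 move is the database settings: 1.2 introduced the DATABASES dictionary, and stale old-style settings (or a stale backend path) can surface exactly at DatabaseWrapper construction. A minimal sketch of the 1.2-style settings (the NAME path is a placeholder):

        # settings.py (Django 1.2 format)
        DATABASES = {
            'default': {
                'ENGINE': 'django.db.backends.sqlite3',  # new dotted-path style
                'NAME': '/path/to/bozorth.db',
            }
        }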

    Read the article

  • Excel export displaying '#####...'

    - by Cypher
    I'm trying to export an Excel database into .txt (Tab Delimited), but some of my cells are quite large. When I export into a txt some of the cells are exported as '#######....' which is surprisingly useless. Has this happened to anyone else? Do you know an easy fix? Data from one cell of my column: Accounting, African Studies, Agricultural/Bioresource Engineering, Agricultural Economics, Agricultural Science, Anatomy/Cell Biology, Animal Biology, Animal Science, Anthropology, Applied Zoology, Architecture, Art History, Atmospheric/Oceanic Science, Biochemistry, Biology, Botanical Sciences, Canadian Studies, Chemical Engineering, Chemistry/Bio-Organic/Environmental/Materials, Church Music Performance, Civil Engineering/Applied Mechanics, Classics, Composition, Computer Engineering, Computer Science, Contemporary German Studies, Dietetics, Early Music Performance, Earth/Planetary Sciences, East Asian Studies, Economics, Electrical Engineering, English Literature/Drama/Theatre/Cultural Studies, Entrepreneurship, Environment, Environmental Biology, Finance, Food Science, Foundations of Computing, French Language/Linguistics/Literature/Translation, Geography, Geography/Urban Systems, German, German Language/Literature/Culture, Hispanic Languages/Literature/Culture, History, Humanistic Studies, Industrial Relations, Information Systems, International Business, International Development Studies, Italian Studies/Medieval/Renaissance, Jazz Performance, Jewish Studies, Keyboard Studies, Kindergarten/Elementary Education, Kindergarten/Elementary Education/Jewish Studies, Kinesiology, Labor/Management Relations, Latin American/Caribbean Studies, Linguistics, Literature/Translation, Management Science, Marketing, Materials Engineering, Mathematics, Mathematics/Statistics, Mechanical Engineering, Microbiology, Microbiology/Immunology, Middle Eastern Studies, Mining Engineering, Music, Music Education, Music History, Music Technology, Music Theory, North American Studies, Nutrition, Operations Management, Organizational Behavior/Human Resources Management, Performing Arts, Philosophy, Physical Education, Physics, Physiology, Plant Sciences, Political Science, Psychology, Quebec Studies, Religious Studies/Scriptures/Interpretations/World Religions, Resource Conservation, Russian, Science for Teachers, Secondary Education, Secondary Education/Music, Secondary Education/Science, Social Work, Sociology, Software Engineering, Soil Science, Strategic Management, Teaching of French/English as a Second Language, Theology, Wildlife Biology, Wildlife Resources, Women’s Studies.

    Read the article

  • Django Error - AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS'

    - by Randy Simon
    I am trying to learn django by following along with this tutorial. I am using django version 1.1.1. I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver, but here is where I get the following error:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute
            translation.activate('en-us')
          File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate
            return real_activate(language)
          File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader
            return g['real_%s' % caller](*args, **kwargs)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate
            _active[currentThread()] = translation(language)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch
            for localepath in settings.LOCALE_PATHS:
          File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__
            return getattr(self._wrapped, name)
        AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS'

    Now, I can add a LOCALE_PATHS attribute set to an empty string to my settings.py file, but then it just complains about another setting, and so on. What am I missing here?
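
    One frequent cause of this particular traceback is a mismatched pair of Django installs (an older django-admin.py laying out the project while a different copy of the library answers the import, so global defaults like LOCALE_PATHS are missing). A quick check of which copy is actually in use:

        import django
        print django.get_version()  # should print the version you expect (1.1.1)
        print django.__file__       # shows which installed copy is being imported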

    Read the article

  • python manage.py runserver fails

    - by Randy Simon
    I am trying to learn django by following along with this tutorial. I am using django version 1.1.1. I run django-admin.py startproject mysite and it creates the files it should. Then I try to start the server by running python manage.py runserver, but here is where I get the following error:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 213, in execute
            translation.activate('en-us')
          File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 73, in activate
            return real_activate(language)
          File "/Library/Python/2.6/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader
            return g['real_%s' % caller](*args, **kwargs)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 205, in activate
            _active[currentThread()] = translation(language)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 194, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/Library/Python/2.6/site-packages/django/utils/translation/trans_real.py", line 172, in _fetch
            for localepath in settings.LOCALE_PATHS:
          File "/Library/Python/2.6/site-packages/django/utils/functional.py", line 273, in __getattr__
            return getattr(self._wrapped, name)
        AttributeError: 'Settings' object has no attribute 'LOCALE_PATHS'

    Now, I can add a LOCALE_PATHS attribute set to an empty tuple to my settings.py file, but then it just complains about another setting, and so on. What am I missing here?

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one, but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):

    - For a team's own internal use, when combined with other measurements.
    - For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40).
    - For senior management, when presented as an aggregated statistic across a number of teams or a whole department.

    But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations?

    EDIT: We definitely want to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.

    Read the article

  • Natural vs surrogate keys on support tables

    - by Bugeo
    I have read many articles about the battle between natural and surrogate primary keys. I agree with the use of surrogate keys to identify records of tables whose contents are created by the user. But in the case of supporting tables, what should I use? For example, in a hypothetical table "orderStates", if I use a natural key I would have the following data:

        TABLE ORDERSTATES
        {ID: "NEW", NAME: "New"}
        {ID: "MANAGEMENT", NAME: "Management"}
        {ID: "SHIPPED", NAME: "Shipped"}

    If I use a surrogate key I would have the following data:

        TABLE ORDERSTATES
        {ID: 1, CODE: "NEW", NAME: "New"}
        {ID: 2, CODE: "MANAGEMENT", NAME: "Management"}
        {ID: 3, CODE: "SHIPPED", NAME: "Shipped"}

    Now let's take an example: a user enters a new order. In the case where I use natural keys, in the code I can write this:

        newOrder.StateOrderId = "NEW";

    With surrogate keys, instead, every time I have an additional step:

        stateOrderId_NEW = ...  // retrieve the id corresponding to the record with code "NEW"
        newOrder.StateOrderId = stateOrderId_NEW;

    The same will happen every time I have to move the order to a new status. So, in this case, what are the reasons to choose one key type over the other?
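
    To make the difference concrete, a sketch in SQL against a hypothetical orders table (names invented for illustration):

        -- natural key: the code can reference the state directly
        INSERT INTO orders (order_id, state_order_id)
        VALUES (1001, 'NEW');

        -- surrogate key: every status change needs a lookup first
        INSERT INTO orders (order_id, state_order_id)
        VALUES (1001, (SELECT id FROM orderStates WHERE code = 'NEW'));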

    Read the article

  • Rebuilding old (2010) django project in 2012

    - by birgit
    I am trying to make an old Django project run again. After seemingly having solved issues with old sorl.thumbnail versions and deprecated expressions, I now get this error when running python manage.py runserver. I also tried to copy & paste my old files into a new Django project and get exactly the same error. Maybe someone here has a clue where the problem lies?

        Unhandled exception in thread started by <bound method Command.inner_run of <django.contrib.staticfiles.management.commands.runserver.Command object at 0x2a80510>>
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/django/core/management/commands/runserver.py", line 88, in inner_run
            self.validate(display_num_errors=True)
          File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 249, in validate
            num_errors = get_validation_errors(s, app)
          File "/usr/lib/python2.7/dist-packages/django/core/management/validation.py", line 35, in get_validation_errors
            for (app_name, error) in get_app_errors().items():
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 146, in get_app_errors
            self._populate()
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 61, in _populate
            self.load_app(app_name, True)
          File "/usr/lib/python2.7/dist-packages/django/db/models/loading.py", line 78, in load_app
            models = import_module('.models', app_name)
          File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 73, in <module>
            class Image(models.Model):
          File "/home/me/Documents/wdws/wdws/../wdws/cityofwindows/models.py", line 83, in Image
            'large': {'size': (640, 640)},
          File "/usr/lib/python2.7/dist-packages/django/db/models/fields/files.py", line 233, in __init__
            super(FileField, self).__init__(verbose_name, name, **kwargs)
        TypeError: __init__() got an unexpected keyword argument 'extra_thumbnails'

    I need to re-build the project just for visual documentation locally... so any hints on how to quickly re-run outdated Django projects are very welcome!! Thanks a lot (using Ubuntu 12.04)
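
    Since extra_thumbnails comes from the old sorl-thumbnail 3.x API (it was dropped in the later rewrite of the library), one approach is to rebuild the 2010-era environment in isolation instead of against Ubuntu 12.04's system packages. A sketch; the version pins are educated guesses and may need adjusting:

        virtualenv ~/envs/wdws
        . ~/envs/wdws/bin/activate
        pip install "Django==1.2.7" "sorl-thumbnail==3.2.5"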

    Read the article

  • Can JMX operations take interfaces as parameters?

    - by Thor84no
    I'm having problems with an MBean that takes a Map<String, Object> as a parameter. If I try to execute it via JMX using a proxy object, I get an exception:

        Caused by: javax.management.ReflectionException
            at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:231)
            at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:668)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
        Caused by: java.lang.IllegalArgumentException: Unable to find operation updateProperties(java.util.HashMap)

    It appears that it attempts to use the actual implementation class rather than the interface, and doesn't check if this is a child of the required interface. The same thing happens for extended classes (for example, declare HashMap, pass in LinkedHashMap). Does this mean it's impossible to use an interface for such methods? At the moment I'm getting around it by changing the method signature to accept a HashMap, but it seems odd that I wouldn't be able to use interfaces (or extended classes) in my MBeans.

    Edit: The proxy object is being created by an in-house utility class called JmxInvocationHandler. The (hopefully) relevant parts of it are as follows:

        public class JmxInvocationHandler implements InvocationHandler {
            ...
            public static <T> T createMBean(final Class<T> iface, SFSTestProperties properties, String mbean, int shHostID) {
                T newProxyInstance = (T) Proxy.newProxyInstance(iface.getClassLoader(),
                        new Class[] { iface },
                        (InvocationHandler) new JmxInvocationHandler(properties, mbean, shHostID));
                return newProxyInstance;
            }
            ...
            private JmxInvocationHandler(SFSTestProperties properties, String mbean, int shHostID) {
                this.mbeanName = mbean + MBEAN_SUFFIX + shHostID;
                msConfig = new MsConfiguration(properties.getHost(0), properties.getMSAdminPort(),
                        properties.getMSUser(), properties.getMSPassword());
            }
            ...
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                if (management == null) {
                    management = ManagementClientStore.getInstance().getManagementClient(
                            msConfig.getHost(), msConfig.getAdminPort(),
                            msConfig.getUser(), msConfig.getPassword(), false);
                }
                final Object result = management.methodCall(mbeanName, method.getName(),
                        args == null ? new Object[] {} : args);
                return result;
            }
        }
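
    It is not impossible in general: MBeanServerConnection.invoke takes an explicit signature array that JMX matches literally against the operation's declared parameter types. Assuming the in-house management client ultimately forwards to invoke, the fix is to build that signature from the proxy Method's declared types (which are the interface's, e.g. java.util.Map) instead of letting it default to the runtime class names. A sketch:

        // inside JmxInvocationHandler.invoke: derive the signature from the
        // interface method's declared parameter types, not the runtime classes
        Class<?>[] paramTypes = method.getParameterTypes();
        String[] signature = new String[paramTypes.length];
        for (int i = 0; i < paramTypes.length; i++) {
            signature[i] = paramTypes[i].getName(); // e.g. "java.util.Map"
        }
        Object result = connection.invoke(new ObjectName(mbeanName),
                method.getName(), args == null ? new Object[0] : args, signature);

    Here connection stands in for however the in-house ManagementClient reaches its MBeanServerConnection; the key point is passing the interface name in the signature array.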

    Read the article

  • CodePlex Daily Summary for Friday, February 19, 2010

    New Projects

    - Application Management Library: Application Management makes your application life easier. It will automatic do memory management, handle and log unhandled exceptions, profiling y...
    - Audio Service - Play Wave Files From Windows Service: This is a windows service that checks a registry key; when the key is updated with a new wave file path the service plays the wave file.
    - Aviamodels: 3d drawing Aviamodels
    - Control of payment proofs program for Greek citizens: This is a program that is used for Greek citizens who want to keep track of their payment proofs.
    - Cover Creator: Cover Creator gives you the possibility to create and print CD covers. Content of CD is taken from http://www.freedb.org/ or can be added/modyfied ...
    - DevBoard: DevBoard is a webbased scrum tool that helps developers/teams get a clear overview of the project progress. It's developed in C# and Silverlight.
    - Flex AdventureWorks: This is mostly a skunk-works application to help me get acclimated to CodePlex. The long term goal is to integrate a Flex UI with the AdventureWor...
    - GRE Wordlist: An intuitive and customizable word list for GRE aspirants. Developed in Java using a word list similar to Barron's.
    - Indexer: A desktop file Index and Search tool which allows you to choose a list of folders to index, and then search on later. It is based on Lucene.net an...
    - Project Management Office (PMO) for SharePoint: Sample web part for the Code Mastery event in Boston, February 11, 2010.
    - Restart SQL Audit Policy and Job: Resolve SQL 2008 Audit Network Connectivity Issue.
    - Rounded Corners / DIV Container: The RoundedDiv round corners container is a skin-able, CSS compliant UI control. Select which corners should be rounded, collapse and expand the c...
    - Silverlight Google Search Application: The Silverlight Google Search Application uses the Google Search API and behaves like an Internet search application with an option to preview the desired page i...
    - Weather Forecast Control: MyWeather forecast control pulls up-to-date weather forecast information from The Weather Channel for your website.

    New Releases

    - Application Management Library: ApplicationManagement v1.0: First Release
    - Audio Service - Play Wave Files From Windows Service: Audio Service v1.0: This is a working version of the Audio Service. Please use as you need to.
    - AutoMapper: 1.0.1 for Silverlight 3.0 Alpha: AutoMapper for Silverlight 3.0. Features not supported: IDataReader mapping; IListSource mapping. All other features are supported.
    - Buzz Dot Net: Buzz Dot Net v.1.10219: Buzz Dot Net Library (Parser & Objects) + WPF Example (using MVVM & Threading)
    - Canvas VSDOC Intellisense: V 1.0.0.0a: This release contains two JavaScript files: canvas-utils.js (can be referenced in both runtime and development environment) and canvas-vsdoc.js (must ...
    - Control of payment proofs program for Greek citizens: Payment Proofs: source code
    - Courier: Beta 2: Added Rx Framework support and re-factored how message registration and un-registration works. Blog post explaining the updates and re-factoring c...
    - Cover Creator: Initial release: This is the initial stable release. For now only in Polish language.
    - Employee Scheduler: Employee Scheduler 2.2: Small bug found (total hour calculation); see http://employeescheduler.codeplex.com/WorkItem/View.aspx?WorkItemId=6059 Extract the files...
    - EnhSim: Release v1.9.7.1: Implemented Dislodged Foreign Object trinket. Whispering Fanged Skull now also procs off Flame Shock dots. You can toggle bloodlust o...
    - Extend SmallBasic: Teaching Extensions v.007: added SimpleSquareTest; added Tortoise.Approve() for virtual proctor; how to use virtual proctor: change the path in the proctor.txt file (located i...
    - FolderSize: FolderSize.Win32.1.0.1.0: A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...
    - GLB Virtual Player Builder: v0.4.0 Beta: Allows the user to import and use archetypes for building players. The archetypes are contained in the file "archetypes.xml". This file is editab...
    - Google Map WebPart from SharePoint List: GMap Stable Release
    - Henge3D Physics Library for XNA: Henge3D Source (2010-02 R2): Fixed a build error related to an assembly attribute in XBOX 360 builds. Tweaked the controls in the sample when targeting the 360. Reduced the...
    - Indexer: Beta Release 1: Just the initial/rough cut.
    - NukeCS: NukeCS 5.2.3 Source Code: update version to 5.2.3
    - ODOS: ODOS STABLE 1.5.0: Thank you for your patience while we develop this version. Not that much has been added, though. Just doing some sub-conscious stuff to make life...
    - PoshBoard: PoshBoard 3.0 Beta 1: Welcome to the first beta release of PoshBoard 3.0! IMPORTANT WARNING: this release is absolutly not feature complete and is error-prone. Okay, ...
    - Restart SQL Audit Policy and Job: Restart SQL 2008 Audit Policy and Job: This folder contains three pieces of source code: Server Audit Status (Started).xml - Import this on-schedule policy into your server's Policy-Ba...
    - SAL - Self Artificial Learning: Artificial Learning 2AQV Working Proof Of Concept: This is the Simulation proof of concept version that comes after the 1aq version. AQ stands for Anwering Questions.
    - SharePoint 2010 Word Automation: SP 2010 Word Automation - Workflow Actions 1.1: This release includes two new custom workflow activities for SharePoint Designer: Convert Folder and Convert Library. More information about these new...
    - SharePoint Outlook Connector: Version 1.0.1.1: Exception logging has been improved.
    - Sharpy: Sharpy 1.2 Alpha: This is the third Sharpy release. A change has been made to allow overriding the master page from the controller. The release contains the single ...
    - Silverlight Google Search Application: SL Google Search App Alpha: This is just a first alpha version of the application, as it looks like when I uploaded it to CodePlex. The application works, requires Silverlight...
    - Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity 1.0 RC: EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser VS 2010 RC MVC 2 RC db MSSQL2008
    - thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.6): This release is an update for WSCF.blue V1. Below are the bug fixes made since the V1.0.5 release: The data contract type filter was not including...
    - TS3QueryLib.Net: TS3QueryLib.Net Version 0.18.13.0: Changelog: Added overloads to all methods of QueryRunenr class handling permission tasks to allow passing of permission name instead of permissionid...
    - Umbraco CMS: Umbraco 4.1 Beta 2: This is the second beta of Umbraco 4.1. Umbraco 4.1 is more advanced than ever, yet faster, lighter and simpler to use than ever. We, on behalf of...
    - VCC: Latest build, v2.1.30218.0: Automatic drop of latest build
    - Zack's Fiasco - Code Generated DAL: v1.2.4: Enhancements: SQL Server CRUD Stored Procedures: added option for USE <db>; added option to create or not create INSERT sprocs; added option to cr...

    Most Popular Projects

    Rawr, WBFS Manager, AJAX Control Toolkit, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, Windows Presentation Foundation (WPF), Image Resizer Powertoy Clone for Windows, ASP.NET, Microsoft SQL Server Community & Samples, DotNetNuke® Community Edition

    Most Active Projects

    Rawr, Sharpy, DinnerNow.net, BlogEngine.NET, jQuery Library for SharePoint Web Services, NB_Store - Free DotNetNuke Ecommerce Catalog Module, patterns & practices – Enterprise Library, PHPExcel, Facebook Developer Toolkit, Fluent Ribbon Control Suite

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
    As 2012 fades and we usher in a New Year, let's look back at some of the hottest BPM trends and those we'll be seeing more of in the coming months. BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communications and new devices to interact, the changes are reflected in BPM practices. As Social and Mobile have become an integral part of our personal and professional lives, we'll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on Customer Experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Here are some of the trends that will continue to gain momentum in 2013:

    Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today's business users.

    Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting the policy of "Bring Your Own Device," and the device of choice is a tablet. Devices like smart phones and tablets not only provide mobility to workers and customers, but they also provide additional important information - the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution.

    Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services.
    Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer-facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience.

    Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independent of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind. The database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most of the business changes to policy or process can be done at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient.

    Adaptive Case Management proposes more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next because the sequence of steps can't be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated, and customers adopt more and more self-service, cases that reach the case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capability to manage such unstructured and complex processes.

    Smart BPM, or making your BPM intelligent, has been the holy grail for BPM practitioners who imagined that one day BPM would become one with Business Intelligence, Business Activity Monitoring and Complex Event Processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real Time Decisions to help organizations create smart process based solutions.

    In order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators, pre-built best practices applications built on Oracle BPM Suite that are fully production grade and ready to deploy. These are exciting times for BPM practitioners and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for managing and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product. In the first release of Oracle Utilities Customer Care And Billing (V1.x, I am talking about), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was its performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server, it was not at the same granularity as txrpt or as useful. I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework, but it is now accessible via JMX and we have also added more detail to the performance tracking. Most of this new design came from working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but also allowed for finer-grained performance analysis. As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in the RMI Port number for JMX Business option and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    - spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine:

        spl.runtime.management.rmi.port=6750
        spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
        jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
        jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
        ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled

    - ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX Security documentation.

    Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration).
    Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for a remote connection, ensure java is in your path and jconsole.jar in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry. You are then able to track performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following:

    - The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc) so you can see the performance of updates against read transactions.
    - The Minimum and Maximum are also collected to give you an idea of the spread of performance.
    - The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data.
    - The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.

    There are a number of interesting operations that can also be performed:

    - reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset.
    - completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format:

        ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
        ...
        CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
        CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
        CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
        CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
        CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
        CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
        InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
        InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
        MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
        NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
        ...

    There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud that we implemented, as it allows flexible monitoring of the performance of the product.
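
    The same dump can also be pulled programmatically by any JSR160 client; a minimal Java sketch, assuming the dump operation returns the CSV as a String (the object name below is an assumption inferred from the spl.properties entry above, so check the actual name in jconsole first):

        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector");
        Map<String, Object> env = new HashMap<String, Object>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "jmxuser", "jmxpassword" });
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        MBeanServerConnection mbs = connector.getMBeanServerConnection();
        ObjectName perf = new ObjectName(
            "com.splwg.ejb.service.management:type=PerformanceStatistics"); // assumed name
        String csv = (String) mbs.invoke(perf, "completeExecutionDump",
            new Object[0], new String[0]);
        connector.close();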

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Working on various versions of SQL Server 7.0, Oracle 7.3 and other database technologies – he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I think the series from Pinal is a good one for anyone planning to start on Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, I react generally saying – “Sir, do you really have a ‘Big Data’ problem or do you have a big Data problem?” Generally, there is a silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data and for few it is bad data which is same as no data :). Wow, don’t discount me as someone who opposes “Big Data”, I am a big supporter as much as I am a critic of the abuse of this term by the people. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift how I am going to question Big Data terms from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like blank whiteboard with no idea – “why Big Data”. They want to jump into the bandwagon of technology and they want to decipher insights from their unexplored data a.k.a. unstructured data with structured data. So what are these industry scenario’s that come to mind? Here are some of them: Financials Fraud detection: Banks and Credit cards are monitoring your spending habits on real-time basis. Customer Segmentation: applies in every industry from Banking to Retail to Aviation to Utility and others where they deal with end customer who consume their products and services. Customer Sentiment Analysis: Responding to negative brand perception on social or amplify the positive perception. Sales and Marketing Campaign: Understand the impact and get closer to customer delight. Call Center Analysis: attempt to take unstructured voice recordings and analyze them for content and sentiment. Medical Reduce Re-admissions: How to build a proactive follow-up engagements with patients. Patient Monitoring: How to track Inpatient, Out-Patient, Emergency Visits, Intensive Care Units etc. Preventive Care: Disease identification and Risk stratification is a very crucial business function for medical. Claims fraud detection: There is no precise dollars that one can put here, but this is a big thing for the medical field. Retail Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: Every sensors and RFID data can be tracked for warehouse space optimization. Location based marketing: Based on where a check-in happens retail stores can be optimize their marketing. Telecom Price optimization and Plans, Finding Customer churn, Customer loyalty programs Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis Customer Behavior Analysis Insurance Fraud Detection & Analysis, Pricing based on customer Sentiment Analysis, Loyalty Management Agents Analysis, Customer Value Management This list can go on to other areas like Utility, Manufacturing, Travel, ITES etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. These are just representative list. Where to start? 
    A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation:

    - Are you getting the data you need, the way you want it, and in a timely manner?
    - Can you get in and analyze the data you need?
    - How quickly does IT respond to your BI requests?
    - How easily can you get at the data you need to run your business/department/project?
    - How are you currently measuring your business?
    - Can you get the data you need to react WITHIN THE QUARTER to impact behaviors and meet your numbers, or is it always “rear-view mirror”?
    - How are you measuring: the brand, customer sentiment, your competition, your pricing, your performance, supply chain efficiencies, predictive product/service positioning?
    - What are your key challenges in driving collaboration across your global business? What are the challenges in innovation?
    - What challenges are you facing in getting more information out of your data?

    Note: Garbage in is garbage out. This holds good for all reporting and analytics requirements.

    Big Data POCs?

    A number of customers set up a small team to work on Big Data - a great start from an understanding point of view, but I tend to ask such customers a number of other questions. Some of the common ones are:

    - To what degree are your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data efforts?
    - Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line?
    - Do you plan to employ machine learning technology while doing advanced analytics?
    - How is social media being monitored in your organization?
    - What is your ability to scale in terms of storage and processing power?
    - Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency?
    - Do you use event-driven architecture to manage incoming data?
    - Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources?
    - Is your organization currently using or considering in-memory analytics?
    - To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse?
    - Have you extended the role of Data Stewards to include ownership of Big Data components?
    - Do you prioritize data quality based on the source system (that is, Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) data from a tracking system)?
    - Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time?
    - Do Data Scientists work in close collaboration with Data Stewards to ensure data quality?
    - How is access to attributes of Big Data granted in the organization?
    - Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined?
    - How involved is risk management in the Big Data governance process?
    - Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed?
    - Who is the key sponsor for your Big Data governance program? (The CIO is best.)
    - Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer geo-location data?
    - How accessible are complex analytic routines to your user base?
    - What is the level of involvement of outside vendors and third parties in the planning and execution of Big Data projects?
    - What programming technologies are utilized by your data warehouse/BI staff when working with Big Data?

    These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. They give you a sense of direction on where to start, what to use, how to secure, how to analyze, and more.

    Sign off

    Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling - Gapminder videos (2:17 to 6:06) about third-world myths. Don’t get overwhelmed by the Big Data buzzword; the destination - what your data is telling you - is what matters. In this blog post we did not look at any particular Big Data technologies; this is a set of questionnaires to keep in mind as you embark on your Big Data journey. I wrote some of the basics in my blog: Big Data - Big Hype yet Big Opportunity. Do let me know if these questions make sense.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip.

    Great IT Plans Get Executed When You Respect the Users

    Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2.

    What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools, which demonstrate a respect for the user, would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of dragging the business along, to unlock the value of their investment in PeopleSoft.

    This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits.
    In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum and begin reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offer, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.”

    We also received valuable feedback about our investment for the Staffing industry when we visited Hans Wanders, CIO of Randstad (the second largest Staffing company in the world), in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort!

    Co-existence of PeopleSoft and Fusion Applications Just Makes Sense

    As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works at three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2 and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers.

    Next Generation Business Intelligence Is the Key to the Future

    The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations looking to transform business in radical ways. While a number of the organizations we visited have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what is most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need for perfectly structured data and at the same time generate analytics that improve order-fulfillment decision-making. To them, the path to future growth lies in the ability to analyze unstructured data rapidly and intuitively, leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see.

    For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, the organization needs profiles set up, people ranked against profiles, margin targets set up, margins measured, distances set up, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions.
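    To make the traditional, structured-data version of this decision concrete, here is a minimal sketch of a weighted scoring over criteria (a) through (d). Everything in it is hypothetical for illustration - the Candidate fields, the weights, and the distance normalization are invented, not drawn from PeopleSoft or any Oracle product.

    # A minimal sketch of weighted resource-to-assignment scoring.
    # All names, weights, and data below are hypothetical; a real system
    # would pull these from profile, margin, and distance tables that the
    # organization maintains - which is exactly the data-quality burden
    # the paragraph above describes.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        skill_match: float       # 0..1, rank against the assignment profile
        past_client_csat: float  # 0..1, client satisfaction on past work
        preference_fit: float    # 0..1, individual's interest in this work
        travel_km: float         # one-way distance to the client site
        expected_margin: float   # projected margin as a fraction of bill rate

    # Hypothetical weights reflecting points (a)-(d) in the post.
    WEIGHTS = {"csat": 0.3, "preference": 0.2, "distance": 0.2, "margin": 0.3}

    def score(c: Candidate, max_travel_km: float = 500.0) -> float:
        """Weighted score; travel is inverted so shorter distances score higher."""
        distance_score = max(0.0, 1.0 - c.travel_km / max_travel_km)
        return (WEIGHTS["csat"] * (c.skill_match * c.past_client_csat)
                + WEIGHTS["preference"] * c.preference_fit
                + WEIGHTS["distance"] * distance_score
                + WEIGHTS["margin"] * c.expected_margin)

    candidates = [
        Candidate("A", 0.9, 0.8, 0.6, 120, 0.35),
        Candidate("B", 0.7, 0.9, 0.9, 300, 0.40),
    ]
    best = max(candidates, key=score)
    print(f"Best fit: {best.name} (score={score(best):.3f})")

    The point of the sketch is the overhead it implies: every input field presumes a data capture and quality process behind it, which is precisely the burden the executives wanted to minimize.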
    In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like that described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword-search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results.

    Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigues you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article

  • Oracle Enterprise Manager 12c Integration With Oracle Enterprise Manager Ops Center 11g

    - by Scott Elvington
    In a blog entry earlier this year, we announced the availability of the Ops Center 11g plug-in for Enterprise Manager 12c. In this article I will walk you through the process of deploying the plug-in on your existing Enterprise Manager agents and show you some of the capabilities the plug-in provides. We’ll also look at the integration from the Ops Center perspective: I will show you how to set up the connection to Enterprise Manager and give an overview of the information that is available.

    Installing and Configuring the Ops Center Plug-in

    The plug-in is available for download from the Self Update page (Setup -> Extensibility -> Self Update). The plug-in name is “Ops Center Infrastructure stack”. Once you have downloaded the plug-in, navigate to the Plug-In management page (Setup -> Extensibility -> Plug-ins) to begin deployment. The plug-in must first be deployed on the Management Server; you will need to provide the repository password of the SYS user in order to deploy it there.

    There are a few prerequisites that need to be completed on the Ops Center side before the plug-in can be deployed and configured on the desired Enterprise Manager agents. Any servers, whether physical or virtual, for which you wish to see metrics and alerts need to be managed by Ops Center. This means that the operating system needs to have an Ops Center management agent installed as a minimum. The plug-in provides even more value when Ops Center is also managing the other “layers of the stack”, for example the service processor, the blade chassis, or the XSCF of an M-Series server. The more information Ops Center has about the stack, the more information will be visible within Enterprise Manager via the plug-in.

    In order to access the information within Ops Center, the plug-in requires a user to connect as. This user does not require any particular Ops Center permissions or roles; it simply needs to exist. You can create a specific “EMPlugin” user within Ops Center or use an existing user, although Oracle recommends creating a specific, non-privileged user account for this purpose. From the Ops Center Administration section, select Enterprise Controller, click the Users tab, and finally click the Add User icon to create the desired user account. For the purpose of this article I have discovered and managed the OS and service processor of the server where my Enterprise Manager 12c installation is hosted.

    With the plug-in deployed to the Management Server and the setup done within Ops Center, we’re now ready to deploy the plug-in to the agents and configure the targets to communicate with the Ops Center Enterprise Controller. From the Setup menu select Add Targets, then Add Targets Manually. Select the bottom radio button “Add Targets Manually by Specifying Target Monitoring Properties”, select Infrastructure Stack from the Target Type dropdown and, finally, select the Monitoring Agent where you wish to deploy the plug-in. Click the Add Manually... button and fill in the details for the new target, using the appropriate hostname for your Enterprise Controller and the user and password details for the plug-in access user.
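    If you prefer scripting over the console, the same manual target addition can in principle be driven from the Enterprise Manager command line. The sketch below is a hedged illustration only: emcli and its add_target verb are real, but the internal target type string, the property key names, and the hostnames shown are hypothetical placeholders that this article does not confirm - check `emcli add_target -help` on your own installation for the exact strings.

    # Hedged sketch: scripting the "Add Targets Manually" step via emcli.
    # The type name "infrastructure_stack" and the EC_* property keys are
    # placeholders, not verified values for the Ops Center plug-in.

    import subprocess

    def add_infrastructure_stack_target(agent_host: str, ec_host: str,
                                        user: str, password: str) -> None:
        """Register an Infrastructure Stack target against an EM agent."""
        properties = ";".join([
            f"EC_Hostname:{ec_host}",   # placeholder property key
            f"EC_Username:{user}",      # placeholder property key
            f"EC_Password:{password}",  # placeholder property key
        ])
        subprocess.run([
            "emcli", "add_target",
            "-name=InfraStack_" + ec_host,
            "-type=infrastructure_stack",  # placeholder internal type name
            "-host=" + agent_host,          # host monitored by the EM agent
            "-properties=" + properties,
        ], check=True)

    # Hypothetical usage:
    # add_infrastructure_stack_target("emagent01.example.com",
    #                                 "opscenter.example.com",
    #                                 "EMPlugin", "secret")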
    After the target has been added to the agent, allow a few minutes for the initial data collection to complete. Once it has completed, you can see the new target in the All Targets list.

    All metric collections are enabled by default except one. To enable Infrastructure Stack Alarms collection, navigate to the newly added target and then to Target -> Monitoring -> Metric and Collection Settings. There you can click the “Disabled” link under Collection Schedule to enable collection and set your desired collection frequency. By default, a Warning level alert in Ops Center will equate to a Warning level event in Enterprise Manager, and a Critical alert will equate to a Critical event. This mapping can also be altered in the Metric and Collection Settings. The default incident rules in Enterprise Manager only create incidents from Critical events, so keep this in mind in case you want to see incidents generated for Warning or Info level alerts from Ops Center. Also, because Enterprise Manager already monitors the OS through its Host target type, the plug-in does not pull OS alerts from Ops Center, so as to prevent duplication.

    In addition to alert propagation, the plug-in also provides data for several reports detailing the topology and configuration of the stack, as well as any hardware sensor data that is available. These are available from the Information Publisher Reports; navigate there from the Enterprise -> Reports menu or directly from the Infrastructure Stack target of interest. As an example, the Hardware Sensors report shows some of the available sensor data, and the report can also be exported to a CSV file format if desired.

    Connecting Ops Center to the Enterprise Manager Repository

    For an Enterprise Manager user, the plug-in provides deeper visibility into the state of the infrastructure underlying the databases and middleware. On the Ops Center side, there is also greater visibility into the targets running on the infrastructure. To set up the Ops Center data collection, navigate to the Administration section and select the Grid Control link. Select the Configure/Connect action from the right-hand menu and complete the wizard forms to enable the connection to the Enterprise Manager repository and UI. Be sure to use the sysman account when configuring the database connection. Once the job completes and the initial data synchronization is done, you will see new Target tabs on your OS assets. The new tab lists all the Enterprise Manager targets and any alerts, availability, and performance data specific to the selected target. It is also possible to use the GoTo icon to launch the Enterprise Manager BUI in the context of the specific target or alert to drill into more detail.

    Hopefully this brief overview of the integration between Enterprise Manager and Ops Center has provided a jumpstart to getting a more complete view of the full stack of your enterprise systems.

    Read the article

  • California State University Standardizes 23 Campuses on Oracle PeopleSoft

    - by Noelia Gomez
    The largest university system in the United States consolidates financial management, and will consolidate human capital management, to improve efficiency and reduce costs.

    California State University (CSU) is standardizing its system of 23 campuses and the Office of the Chancellor on Oracle PeopleSoft applications. CSU is the largest public university system in the United States; it awards 90,000 degrees per year and, since its founding in 1961, has awarded almost 2.6 million. As part of its strategic plan, “Access to Excellence”, CSU has committed to taking advantage of technology to meet future education needs and has created the Common Management Systems (CMS) initiative. The mission of CMS is to deliver efficient, effective, quality service to the students, faculty, and staff of CSU’s 23 campuses and the Office of the Chancellor.

    In an effort to improve efficiency and reduce costs across the system, CSU has consolidated its financial management processes and systems on PeopleSoft Financial Management. To provide a unified view of financial operations, CSU has consolidated 22 campuses onto a single instance of PeopleSoft Financials, including the General Ledger, Billing, Accounts Payable, Accounts Receivable, and Purchasing applications. CSU also uses Oracle Business Intelligence Enterprise Edition for analysis and reporting. In addition, CSU is consolidating on PeopleSoft Human Capital Management (HCM) in pursuit of several strategic objectives, among them attracting and retaining top faculty and staff.

    CSU’s consolidated financial system is now the largest single higher-education deployment of PeopleSoft Financials in the United States. A centralization of this scale is unprecedented, yet it was completed on time and within budget. Once complete, CSU will also be the largest higher-education deployment of PeopleSoft HCM.
    CSU also uses PeopleSoft Campus Solutions to manage its student academic operations, including class scheduling and registration, fee calculation and collection, financial aid awards, and the assessment of students’ academic progress.

    Oracle’s PeopleSoft applications are designed to address the most complex business requirements. They provide comprehensive business and industry solutions, enabling organizations to increase productivity, accelerate business performance, and lower total cost of ownership. Oracle’s PeopleSoft Campus Solutions is a complete software suite designed for the changing needs of higher-education institutions. Oracle continues to collaborate with CSU and with colleges and universities around the world to deliver the most responsive and comprehensive student administration and development systems available today.

    Unisys is implementing the PeopleSoft applications for CSU and providing students, faculty, and staff with access to them through the Unisys Hosted Secure Private Cloud solution. Unisys is a Platinum level member of Oracle PartnerNetwork (OPN).

    “In order to continue meeting the goals of our strategic plan, we believe it is essential to standardize our operating processes - both in the back office and wherever our students go every day - to maximize our investment in technology and reduce costs,” said Larry Furukawa-Schlereth, Chief Financial Officer, Sonoma State University, and chair of the CSU Global Management Executive Committee, California State University. “Having a single view of all important information in one system helps California State University continue to offer excellent education opportunities at an affordable cost while helping shape the future quality of civic and economic life in California.”

    Learn more about PeopleSoft here:
    Oracle’s PeopleSoft Applications
    Oracle in Higher Education
    Oracle’s PeopleSoft Financial Management
    Oracle’s PeopleSoft Human Capital Management
    Oracle’s PeopleSoft Campus Solutions
    Oracle Human Capital Management Blog
    Oracle HCM on Twitter
    Oracle Higher Education on Facebook
    Oracle Higher Education on Diigo

    Read the article
