Search Results

Search found 26214 results on 1049 pages for 'farm solution'.


  • Non-USB powered laptop cooling pad?

    - by Andrei Rinea
    On my laptop (el-cheapo but pretty good) all 3 USB ports are occupied by devices in use (external mouse, external keyboard, 3G internet/flash drive/external hard drive/etc.). I want a cooling pad for my hard-worked laptop, but all of them seem to be powered by USB... This laptop is mainly stationary. So: is there a way to power a USB-powered cooling pad other than connecting it to the laptop, or is there a laptop cooling pad that is not USB powered?

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting, and that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since Release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Set up chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used: maximum availability, maximum performance, maximum protection
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (all of which are supported in EM 12cR4):

        Primary   Standby [1 or more]
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    where RON = RAC One Node, supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions, thus delivering the benefits of database thin cloning without requiring you to drastically change your infrastructure or IT's operating style.

    In Release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup: no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c (performance, configuration, etc.)

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both on Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which likewise takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
    - The database_cloud_setup.py script takes two inputs:
      - Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
      - Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
    - Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

          emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    - The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal.

    More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in Release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like the number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    Most of these have been asked for by customers like you. They are:

    - Custom naming of DB Services
      - Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces
      - Every custom name is validated for uniqueness in EM
    - 'Create like' of Service Templates
      - Creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer
      - View the details of a profile (datafiles, control files, snapshot ids, export/import files, etc.) prior to its selection in the service template
    - Cleanup automation, for failed and successful requests
      - A single emcli command cleans up all remnant artifacts of a failed request
      - Cleanup can be performed on a per request basis or for the entire pool
      - As an extension, you can also delete successful requests
    - Improved delete user workflow
      - Allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service
      - In addition to multiple schemas, users can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:

    - Cloud Management page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • How can I use iteration to lead targets?

    - by e100
    In my 2D game, I have stationary AI turrets firing constant speed bullets at moving targets. So far I have used a quadratic solver technique to calculate where the turret should aim in advance of the target, which works well (see Algorithm to shoot at a target in a 3d game, Predicting enemy position in order to have an object lead its target). But it occurs to me that an iterative technique might be more realistic (e.g. it should fire even when there is no exact solution), efficient and tunable - for example one could change the number of iterations to improve accuracy. I thought I could calculate the current range and thus an initial (inaccurate) bullet flight time to target, then work out where the target would actually be by that time, then recalculate a more accurate range, then recalculate flight time, etc etc. I think I am missing something obvious to do with the time term, but my aimpoint calculation does not currently converge after the significant initial correction in the first iteration:

        import math

        def aimpoint(iters, target_x, target_y, target_vel_x, target_vel_y, bullet_speed):
            aimpoint_x = target_x
            aimpoint_y = target_y
            range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
            time_to_target = range / bullet_speed
            time_delta = time_to_target
            n = 0
            while n <= iters:
                print "iteration:", n, "target:", "(", aimpoint_x, aimpoint_y, ")", "time_delta:", time_delta
                aimpoint_x += target_vel_x * time_delta
                aimpoint_y += target_vel_y * time_delta
                range = math.sqrt(aimpoint_x**2 + aimpoint_y**2)
                new_time_to_target = range / bullet_speed
                time_delta = new_time_to_target - time_to_target
                n += 1

        aimpoint(iters=5, target_x=0, target_y=100, target_vel_x=1, target_vel_y=0, bullet_speed=100)
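    For illustration, here is a hedged sketch (my own, not part of the original code) of the same loop with one change: time_to_target is updated to the latest estimate on every pass, so time_delta becomes the difference between successive estimates rather than the distance from the very first one. With that change the aim point settles after a couple of iterations in this toy run.

        import math

        def aimpoint_converging(iters, target_x, target_y, vel_x, vel_y, bullet_speed):
            aim_x, aim_y = target_x, target_y
            time_to_target = math.sqrt(aim_x**2 + aim_y**2) / bullet_speed
            time_delta = time_to_target
            for n in range(iters):
                aim_x += vel_x * time_delta
                aim_y += vel_y * time_delta
                new_time_to_target = math.sqrt(aim_x**2 + aim_y**2) / bullet_speed
                time_delta = new_time_to_target - time_to_target  # correction shrinks each pass
                time_to_target = new_time_to_target               # keep the latest estimate
            return aim_x, aim_y

        print(aimpoint_converging(5, 0, 100, 1, 0, 100))  # settles near (1.0, 100.0)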

    Read the article

  • Folder Size Column on Explorer on Windows Vista/Seven

    - by Click Ok
    I'm a big fan of FolderSize, but unfortunately it works only on Windows XP. Even after reading this and this, I'm not convinced that I cannot have a column showing the folder size in Windows Explorer. Even with all its "problems", FolderSize worked like a charm on Windows XP. In a sysadmin's life, FolderSize is splendid. Before selecting a lot of folders to back up to DVDs, I can check the size of the folders directly in Windows Explorer and pick a set of folders adding up to 4.3 GB to burn to a DVD. In another situation, I can look at the root folder, see the biggest folders on the hard drive, and work out a good strategy for backup/partitioning/transfer to another drive/etc. I could list plenty of other situations where, in my sysadmin life, I need a tool like FolderSize... Is anyone actively developing a solution to show folder sizes in Windows Explorer on Vista/Seven? What problems might I face if I develop such an "add-in" for Windows Explorer myself?
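    For what it's worth, the size computation itself (as opposed to the Explorer column) is easy to sketch. The snippet below is only my own illustration in Python, not an Explorer add-in, and the starting folder is a made-up example path.

        import os

        def folder_size(path):
            """Sum the sizes of all files underneath path, skipping unreadable entries."""
            total = 0
            for root, dirs, files in os.walk(path, onerror=lambda err: None):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(root, name))
                    except OSError:
                        pass  # broken links, permission errors, files that vanished
            return total

        base = r"C:\Users"  # hypothetical starting folder
        for entry in sorted(os.listdir(base)):
            full = os.path.join(base, entry)
            if os.path.isdir(full):
                print("{:>15,} bytes  {}".format(folder_size(full), entry))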

    Read the article

  • Unable to install SQL Server 2008 Express SP1

    - by dahacker89
    I am facing difficulties installing MS SQL Server Express 2008 Service Pack 1. I already have MS SQL Server Express 2008 installed and all I want to do is install SP1; however, I get the following error message: even though all features are selected, it still tells me to select one or more features. Also, just for information, when I open the SQL Server Configuration Manager to manage my SQL Server services, the following error message is displayed. If anyone has faced this and has a solution, please let me know. My aim is to install Management Studio, but for that I must have at least SP1 installed, and I'm stuck at that point. Thanks.

    Read the article

  • ASP.NET Website not responding

    - by brinthhillerup
    Hello everybody, I am sitting with my server tech, trying to figure out what is going on here. When I try to access my site through a browser, the site hangs indefinitely. It does not respond with anything, not even after 15 minutes! The application pool has been restarted, and the website has been restarted. That doesn't help at all. It happened during an upload. A backup has been loaded and the problem is still the same. It is a very urgent matter; I hope someone can help me! I think we are on IIS 6 on a Windows 2003 server. (Remote hosted, with a tech who has no idea what the issue is either, so I am trying to find the solution along with him.) Any suggestions are VERY appreciated.

    Read the article

  • Linux terminal - frozen update of input but can execute commands?

    - by Torxed
    How do I restart a shell session from within SSH when it looks something like this:

        anton@ubuntu:~$ c: command not found
        anton@ubuntu:~$ lib
        anton@ubuntu:~$ this is working, but its messed up
        anton@ubuntu:~$

    I can execute commands, but as I input them nothing shows on the console; as soon as I press Enter the command executes and the output appears (without line endings, as shown above).

        exec bash
        bash --login
        clear

    None of these really work; restarting the SSH session, however, does. A temporary solution is to start a screen session, and every time the interface freezes simply press Ctrl+a c to start a new session and close the old one.

    Read the article

  • Best approach for utility class library using Visual Studio

    - by gregsdennis
    I have a collection of classes that I commonly (but not always) use when developing WPF applications. The trouble I have is that if I want to use only a subset of the classes, I have three options: Distribute the entire DLL. While this approach makes code maintenance easier, it does require distributing a large DLL for minimal code functionality. Copy the classes I need to the current application. This approach solves the problem of not distributing unused code, but completely eliminates code maintenance. Maintain each class/feature in a separate project. This solves both problems from above, but then I have dramatically increased the number of files that need to be distributed, and it bloats my VS solution with tiny projects. Ideally, I'd like a combination of 1 & 3: A single project that contains all of my utility classes but builds to a DLL containing only the classes that are used in the current application. Are there any other common approaches that I haven't considered? Is there any way to do what I want? Thank you.

    Read the article

  • Entering Safe Mode problem

    - by NikolaysGS
    Hi! When I start my PC I get the following message: "The Logon User Interface DLL RtlGina2.dll failed to load". I found a solution here, but I cannot log into my PC, neither in safe nor in normal mode. Interestingly, the first time I did succeed in logging into safe mode, but I changed msconfig/boot.ini to safe mode with networking, where I cannot log in. And now, no matter what I choose from the F8 menu, I enter safe mode with networking (where I get the error above). So is there any way to get back into "pure" safe mode? I am not sure the question is clear enough ;-(

    Read the article

  • A new CAPTCHA using sentences?

    - by Xeoncross
    I was just thinking about how recaptcha is getting harder when I thought about another possible solution. Images won't last forever, so we will need something else some day - like human logic or emotion. Google and others are trying to group images by category (find the image that doesn't belong), but that requires a large number of images and doesn't work for the blind. Anyway, what if a massive collection of text was gathered (public-domain books from each language) and a sentence was shown to the user with 1 (or 2) words replaced by a select box of choices? Only a reader (human or program) that knew correct English/Spanish/German grammar would be able to tell which of the words belonged in the sentence. Would there be any problems with this approach? I would assume it would be easy enough for anyone who knew the language the sentence was displayed in to figure out the answer, more easily than trying to read the reCAPTCHA text. Plus, storing an insane number of sentences would only take a couple of gigabytes of space and wouldn't take anywhere near the CPU time that creating images/audio takes. In other words, anyone could host their own captcha system with minimal impact on system performance. Is there a problem with this approach? More specifically, I'm looking for the main problem with this approach.
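    To make the idea concrete, here is a rough sketch of the mechanism (my own illustration; the corpus, helper names and choice count are all made up): pick a sentence, blank out one word, and offer it back together with a few decoy words drawn from other sentences.

        import random

        CORPUS = [
            "the quick brown fox jumps over the lazy dog",
            "she sells sea shells by the sea shore",
            "a journey of a thousand miles begins with a single step",
        ]

        def make_challenge(corpus, choices=4):
            words = random.choice(corpus).split()
            blank_at = random.randrange(len(words))
            answer = words[blank_at]
            # Decoys are words pulled from the rest of the corpus.
            pool = {w for s in corpus for w in s.split()} - {answer}
            options = random.sample(sorted(pool), choices - 1) + [answer]
            random.shuffle(options)
            prompt = " ".join(w if i != blank_at else "_____"
                              for i, w in enumerate(words))
            return prompt, options, answer

        prompt, options, answer = make_challenge(CORPUS)
        print(prompt)
        print("choices:", options)  # the user picks one; compare the pick to `answer`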

    Read the article

  • xauth error with ssh X Forwarding

    - by bdk
    From my (Debain) Desktop machine, I am trying to ssh into a Debian Server with ssh -X remote-ip After logging into the remote host, I get: /usr/bin/X11/xauth: creating new authority file /root/.Xauthority /usr/bin/X11/xauth: (stdin):1: bad display name "unix:10.0" in "remove" command /usr/bin/X11/xauth: (stdin):2: bad display name "unix:10.0" in "add" command And the X Forwarding doesn't work. From my Desktop I can ssh -X into other Debian servers and it works fine. I found a lot of threads discussing similar issues on google, but they all seem to fade out without a solution, and the simple things suggested there like exporting DISPLAY or setting xhost + don't seem to make a difference.

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds):

        ;; ANSWER SECTION:
        orion.2x.to.     116     IN     A     80.237.201.41
        orion.2x.to.     116     IN     A     87.230.54.12
        orion.2x.to.     116     IN     A     87.230.100.10
        orion.2x.to.     116     IN     A     87.230.51.65

    I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs etc.) it balances pretty well. The discrepancy between the highest and lowest loaded server hardly ever exceeds 15%.

    However, now I have the problem that I am introducing more servers into the system, and they do not all have the same capacities. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers too. So what I want is to introduce a server with 10 Gbps with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled). But adding a 10 Gbit server 100 times to DNS is a bit ridiculous.

    So I thought about using the TTL. If I give server A 240 seconds TTL and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers set the TTL to 120 if a lower TTL is specified... so I have heard), I think something like this should occur in an ideal scenario:

    First 120 seconds:
    - 50% of requests get server A -> keep it for 240 seconds
    - 50% of requests get server B -> keep it for 120 seconds

    Second 120 seconds:
    - 50% of requests still have server A cached -> keep it for another 120 seconds
    - 25% of requests get server A -> keep it for 240 seconds
    - 25% of requests get server B -> keep it for 120 seconds

    Third 120 seconds:
    - 25% will get server A (from the 50% of server A that now expired) -> cache 240 sec
    - 25% will get server B (from the 50% of server A that now expired) -> cache 120 sec
    - 25% will have server A cached for another 120 seconds
    - 12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec
    - 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec

    Fourth 120 seconds:
    - 25% will have server A cached -> cache for another 120 secs
    - 12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs
    - 12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs
    - 12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs
    - 12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs
    - 6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs
    - 6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs
    - 12.5% will have server A cached -> cache another 120 secs

    ... I think I lost something at this point, but I think you get the idea...

    As you can see, this gets pretty complicated to predict, and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution!

    I know that weighted round robin exists and is just controlled by the root server. It just cycles through DNS records when responding and returns DNS records with a set probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. If it doesn't weight perfectly, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution, and it doesn't require a DNS server that controls this dynamically, which saves resources; that is, in my opinion, the whole point of DNS load balancing vs. hardware load balancers.

    My question now is: are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records?

    Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous and it also only increases the width of the bottleneck and does not eliminate it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
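    For what it's worth, the rough intuition above can be checked with a toy simulation. The sketch below is my own illustration, not anything from BIND or another DNS server: it assumes every resolver re-queries the authoritative server the instant its cached record expires, picks one of the two A records uniformly at random, and sends a constant stream of traffic to whichever record it currently has cached. Under those (very simplified) assumptions, a record with twice the TTL ends up carrying roughly twice the traffic.

        import random

        def simulate(ttl_a=240, ttl_b=120, resolvers=5000, horizon=86400):
            # Time each record spends as "the cached answer", summed over resolvers.
            time_held = {"A": 0.0, "B": 0.0}
            for _ in range(resolvers):
                t = 0.0
                while t < horizon:
                    server = random.choice(["A", "B"])        # plain round robin pick
                    ttl = ttl_a if server == "A" else ttl_b   # how long that answer stays cached
                    held = min(ttl, horizon - t)
                    time_held[server] += held                 # traffic ~ time the answer is cached
                    t += held
            total = sum(time_held.values())
            return {s: round(v / total, 3) for s, v in time_held.items()}

        print(simulate())  # roughly {'A': 0.667, 'B': 0.333} -- about a 2:1 weighting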

    Read the article

  • Notepad Automatically Launches Upon Boot Up of my Windows 7 64-Bit Desktop PC

    - by Simon
    Around 12 months ago, Notepad started to appear automatically on the desktop when my Windows 7 PC boots up (as a duplicate, i.e. one window on top of the other), both displaying the same text inside, and the title of the Notepad window was "desktop - Notepad":

        [.ShellClassInfo]
        LocalizedResourceName=@%SystemRoot%\system32\shell32.dll,-21787

    I can't remember the exact details as it's been so long now, but a friend told me this was due to a "hidden" attribute on the file becoming "unhidden", and therefore it shows itself on the desktop at startup. He showed me a series of steps to either hide the attribute again or delete the file (telling me that upon restarting my computer it would be recreated/regenerated). I did follow his guidance; however, after trying to hide it and delete it (it was never recreated after a restart, which it supposedly should have been), one Notepad window is still auto-launched with the text shown above every time I start up my PC. As I found this forum, I thought I might try again for a solution to prevent Notepad from launching. Thank you.

    Read the article

  • What are functional-programming ways of implementing Conway's Game of Life

    - by George Mauer
    I recently implemented for fun Conway's Game of Life in Javascript (actually coffeescript but same thing). Since javascript can be used as a functional language I was trying to stay to that end of the spectrum. I was not happy with my results. I am a fairly good OO programmer and my solution smacked of same-old-same-old. So long question short: what is the (pseudocode) functional style of doing it? Here is pseudocode for my attempt:

        class Node
          update: (board) ->
            get number_of_alive_neighbors from board
            get this_is_alive from board
            if this_is_alive and number_of_alive_neighbors < 2 then die
            if this_is_alive and number_of_alive_neighbors > 3 then die
            if not this_is_alive and number_of_alive_neighbors == 3 then alive

        class NodeLocations
          at: (x, y) -> return node value at x,y
          of: (node) -> return x,y of node

        class Board
          getNeighbors: (node) ->
            use node_locations to check 8 neighbors around node and return count

        nodes = for 1..100 new Node
        state = new NodeState(nodes)
        locations = new NodeLocations(nodes)
        board = new Board(locations, state)

        executeRound:
          state = clone state
          accumulated_changes = for n in nodes n.update(board)
          apply accumulated_changes to state
          board = new Board(locations, state)
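    For comparison, here is a hedged sketch (my own, in Python rather than CoffeeScript) of a more functional take: the board is an immutable set of live (x, y) cells and each generation is derived purely from the previous one, with no per-cell objects or mutable state.

        from collections import Counter

        def neighbours(cell):
            x, y = cell
            return [(x + dx, y + dy)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0)]

        def step(live):
            # Count how many live neighbours every candidate cell has.
            counts = Counter(n for cell in live for n in neighbours(cell))
            return frozenset(cell for cell, c in counts.items()
                             if c == 3 or (c == 2 and cell in live))

        # Usage: a blinker oscillates between horizontal and vertical.
        blinker = frozenset({(0, 1), (1, 1), (2, 1)})
        print(step(blinker))                   # the vertical blinker: (1,0), (1,1), (1,2)
        print(step(step(blinker)) == blinker)  # True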

    Read the article

  • Better Method of Opening TTY Permissions

    - by VxJasonxV
    At work, I have a few legacy servers that I log into as root, and then su down to a user. I continue to run into an issue where after doing so, I am unable to run screen as this user. I don't want to open screen as root, because then I have to consciously su down the user every new shell, and I often forget. The question is, is there an easier resolution to this than I'm currently aware of? My current solution is to find my terminal pts number, then set it chmod 666. I'm looking for something akin to X11's xhost ACL management, if such a thing exists for this situation.
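    For reference, the workaround described above can be written as a tiny script run as root just before su'ing down. This is only a sketch of that existing workaround (the question itself asks for something better), and 0666 is simply the mode mentioned in the question.

        import os, stat

        tty = os.ttyname(0)  # the controlling terminal, e.g. /dev/pts/3
        os.chmod(tty, stat.S_IRUSR | stat.S_IWUSR |
                      stat.S_IRGRP | stat.S_IWGRP |
                      stat.S_IROTH | stat.S_IWOTH)  # equivalent of chmod 666
        print("opened up", tty)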

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.   ‘Pure’ SOA involves the modeling of an organisation’s processes, the so called ‘Top Down’ approach, followed by the implementation of these processes as services.     Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.   These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on ‘Succeeding with SOA’. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don’t produce results, are often recommended to attempt an ‘agile’(Erl) or ‘middle-out’ (Microsoft) approach in order to succeed with SOA.  The problem with recommending this approach is that, in most cases, succeeding with SOA isn’t the aim of the project. If a project is started with the simple aim of ‘Succeeding with SOA’ then the reasons for the projects existence probably need to be questioned.   There are a number of things we can be sure of: ·         An organisation will have a number of disparate IT systems ·         Some of these systems will have redundant data and functionality ·         Integration will give considerable ROI ·         Integration will already be under way. ·         Services will already exist in the organisation ·         These services will be inconsistent in their implementation and in their governance   So there are three goals here: 1.       Alignment between the business and IT 2.     Integration of disparate systems 3.     Management of services.   2 and 3 are going to happen,  in fact they must happen if any degree of return is expected from the IT department. Ignoring 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far reaching consequences that this will have on all our LOB systems are current. Lets imagine that this actually works ( how often do we rip and replace working software because it doesn't fit a certain pattern? Never -that's the point of integration), we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.      Accepting that an object can have more than one model over time, with perhaps more than one model being  at any given time will help us realise the limitations of the top down model. It is entirely normal , and perhaps necessary, for an organisation to be able to view an entity from different perspectives.   
So, instead of trying to constantly force these goals in a straight line, why not let them happen in parallel, and manage the changes in each layer.     If  company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department.   If company B’s IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it, by designing a platform and processes for the introduction of services, is this not a valid approach?   With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible. Based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To ‘Succeed with SOA’ Company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.     Could the problem be that there are two different problem domains? And the whole concept of SOA as it being described by clever salespeople today creates an example of oft dreaded ‘tight coupling’ between these two domains?   Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?   Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren’t focusing on their actual goals.   If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A.   Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on ‘Succeeding with SOA’, rather than solving the problem at hand.   This is not to suggest that the two problem domains are unrelated, a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.       Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution’s level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create a integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture).     The business architecture describes the processes and business objects in the business domain.   The technical architecture describes the hosting and management and implementation of services.   
The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation.   If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other.   This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.   How do we do this?  The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft’s Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst’s view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as ‘SOA services’, and the implementations as simple integration ‘adapter’ services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling etc.   This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can’t easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other, even if this approach is frowned upon today, there are many companies doing so and seeing ROI.   Of course there are disadvantages to this. The biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation with redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according the model. However, if that approach worked we wouldn’t be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together so that they are forever bound to the model that only applies at the time of the modeling work will not really achieve a great deal. Architecture is all about trade offs, and here a choice has to be made. 
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • Mouse button and keypress counter for Linux?

    - by rakete
    I would like to have some kind of statistics on my daily mouse/keyboard usage to help me make my keyboard layout a little more efficient. There is already a question about how to do this on Windows, but I would like to know if anyone is aware whether this is possible under Linux. Another thing I have already found is key-mon, a little program for screencasts that displays your mouse and keyboard presses on the screen; it would help me achieve what I want with a little bit of Python coding on my part. But still, if there were a solution already, that would of course be easier. PS: obfuscated link to key-mon because of spam prevention: hxxp://code.google.com/p/key-mon/

    Read the article

  • How do you keep from running into the same problems over and over?

    - by Stephen Furlani
    I keep running into the same problems. The problem is irrelevant, but the fact that I keep running into is completely frustrating. The problem only happens once every, 3-6 months or so as I stub out a new iteration of the project. I keep a journal every time, but I spend at least a day or two each iteration trying to get the issue resolved. How do you guys keep from making the same mistakes over and over? I've tried a journal but it apparently doesn't work for me. [Edit] A few more details about the issue: Each time I make a new project to hold the files, I import a particular library. The library is a C++ library which imports glew.h and glx.h GLX redefines BOOL and that's not kosher since BOOL is a keyword for ObjC. I had a fix the last time I went through this. I #ifndef the header in the library to exclude GLEW and GLX and everything worked hunky-dory. This time, however, I do the same thing, use the same #ifndef block but now it throws a bunch of errors. I go back to the old project, and it works. New project no-worky. It seems like it does this every time, and my solution to it is new each time for some reason. I know #defines and #includes are one of the trickiest areas of C++ (and cross-language with Objective-C), but I had this working and now it's not.

    Read the article

  • How to revert back to older xorg?

    - by wouter205
    Since the last update of Ubuntu 12.04, the system won't boot into the GUI anymore. It states that it was unable to load the graphics drivers and gives me 4 options:

    - run in low graphics mode for 1 session
    - reconfigure graphics
    - troubleshoot the error
    - exit to console login

    Whichever option I choose, it doesn't solve anything. For instance, when I choose to reconfigure graphics and then switch to vesa drivers, the screen goes back to the option list. So I configured xorg.conf for vesa myself, and I see in the update history that Ubuntu updated xserver-xorg-core and xserver-common, whereas I had blocked these updates in Synaptic since I'm aware that updating these files caused trouble with my particular video card (Radeon HD6800). So my solution is probably to revert these files to the older (working) versions. How can I do this, please (in particular for xserver-xorg-core, since I think this is the main cause of my problem)? Thanks!

    Output of ls /etc/X11:

        -app-defaults xorg.conf Xreset -cursors xorg.conf-backup-120529144709 Xreset.d -default-display-manager xorg.conf.fglrx-0 -Xresources -fonts xorg.conf.fglrx-1 Xsession -rgb.txt xorg.conf.fglrx-2 Xsession.d -X xorg.conf.original-0 -Xsession.options -xinit xorg.conf.original-1 -Xwrapper.config -xkb xorg.conf.vesa

    Output of sudo aptitude show xserver-xorg-core | grep Versie (read: Version in Flemish):

        Versie: 2:1.11.4-0ubuntu10.2

    Read the article

  • Middleware Oracle Excellence Awards 2012 & HAPPY NEW YEAR!

    - by JuergenKress
    Thanks for the FY12 middleware business! Make sure you become our WebLogic partner of the year! The Oracle Excellence Awards 2012 are Open for Nominations Middleware Specialized Partners: Submit your Nominations for the Middleware Specialized Partner of the Year by 29 June! The Specialized Partner of the Year Award celebrates OPN Specialized partners in EMEA who have demonstrated success with specialization, delivering customer value, and outstanding solution or service innovation in categories that complement OPN Specialization investments. Nominate now to receive the recognition you deserve! Winners of the Specialized Partner of the Year - EMEA Awards will each receive: $5k MDF for market expansion and promotion of their winning solutions/services extensive visibility across the extended Oracle community through interviews, advertising and video prestige and recognition by being awarded in a ceremony at Oracle OpenWorld. In addition, winners from all the Oracle Excellence Awards categories will receive a free registration to Oracle OpenWorld 2012 in San Francisco, California, as well as be showcased at the conference in October, be given an opportunity to mingle with Oracle executives and their peers, and be featured in Oracle Magazine. Nomination tips: · Build your nomination with Oracle · Provide evidence of your success · Send supporting documents here. · Get a quote from Oracle product management or myself! Closing date: 29 June Full details of all Oracle Awards offered this year are available on the Oracle Excellence Awards Website. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Oracle Excellence Awards 2012,SOA Specialization award,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Middleware Oracle Excellence Awards 2012 - HAPPY NEW YEAR!

    - by JuergenKress
    Thanks for the FY12 middleware business! Make sure you become our SOA & BPM partner of the year! The Oracle Excellence Awards 2012 are Open for Nominations Middleware Specialized Partners: Submit your Nominations for the Middleware Specialized Partner of the Year by 29 June! The Specialized Partner of the Year Award celebrates OPN Specialized partners in EMEA who have demonstrated success with specialization, delivering customer value, and outstanding solution or service innovation in categories that complement OPN Specialization investments. Nominate now to receive the recognition you deserve! Winners of the Specialized Partner of the Year - EMEA Awards will each receive: $5k MDF for market expansion and promotion of their winning solutions/services extensive visibility across the extended Oracle community through interviews, advertising and video prestige and recognition by being awarded in a ceremony at Oracle OpenWorld. In addition, winners from all the Oracle Excellence Awards categories will receive a free registration to Oracle OpenWorld 2012 in San Francisco, California, as well as be showcased at the conference in October, be given an opportunity to mingle with Oracle executives and their peers, and be featured in Oracle Magazine. Nomination tips: · Build your nomination with Oracle · Provide evidence of your success · Send supporting documents here. · Get a quote from Oracle product management or myself! Closing date: 29 June Full details of all Oracle Awards offered this year are available on the Oracle Excellence Awards Website. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Oracle Excellence Awards 2012,SOA Specialization award,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Spirent Communications Improves Customer Experience with Knowledge Management

    - by Tony Berk
    Spirent Communications plc is a global leader in test and measurement, inspiring innovation within development labs, communication networks and IT organizations. The world's leading communications companies rely on Spirent to help design, develop, validate, and deliver world-class networks, devices, and services. Spirent's customers require high levels of support for a diverse and complex product portfolio, and the company is committed to delivering on this requirement. Spirent needed a solution to help its customers get the information they need quickly and at their convenience through its Web site. After evaluating several solutions, Spirent selected and deployed Oracle Knowledge for Web Self Service Enterprise Edition. Oracle Knowledge Management uses natural language processing to understand the true intent of each inquiry logged via the support portal's search function. The Spirent Knowledge Base on the company's Customer Support Center (CSC) finds the best possible answer using search enhancement features, such as communications industry-specific libraries and federation to search external sources. Spirent has reduced contact center call volume while better serving its customers. Each time a customer uses the knowledge base, they find answers faster than by calling, and it saves Spirent an average of US$210 per call, which is significant when multiplied across the thousands of calls received monthly. Oracle Knowledge also helps support engineers find answers more quickly, enabling the company to scale without adding additional support engineers. Oracle Knowledge is integrated with Spirent's Siebel Contact Center implementation to provide an integrated desktop for CRM and agent intelligence, avoiding the need for contact center personnel to toggle between various screens to address customer inquiries, thereby accelerating customer service. Click here to learn more about Spirent's use of Siebel CRM and Oracle Knowledge Management.

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern Web frameworks that claim to be MVC also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" means a declarative definition (oxymoron, no?) of a live link between data (a.k.a. the model) and its representation (a.k.a. the view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is appealing, but I also understand that, as usual, it comes at a price. In particular: 1. Live binding is quite heavy, either with dirty watching (high CPU consumption) or with Object.observe() (high memory consumption, with high CPU load in some scenarios). 2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for. The usual time split applies: 90% of features are built in 10% of project time, but the remaining 10% take 90% of project time. I suspect (a.k.a. an educated guess) that these MVC frameworks are not helping to implement more functionality in less time... and if so, their usage motivation is not quite clear. As an example: last week I wanted to find a virtual list idea/solution. I found one in vanilla JavaScript that is 120 LOC. The equivalent implementation in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself... So my question is: what benefits does that MVC stuff or data binding give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, then what exactly?
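    To make point 1 concrete, here is a deliberately tiny toy model (my own sketch in Python, not how any of the named frameworks are implemented): dirty watching re-compares a snapshot of the model on every digest cycle, whether or not anything changed, while an observer-style model pushes updates only when a setter actually runs.

        class DirtyWatcher:
            def __init__(self, model):
                self.model = model
                self.snapshot = dict(model)
            def digest(self, render):
                # O(n) comparison on every cycle, even when nothing changed.
                for key, value in self.model.items():
                    if self.snapshot.get(key) != value:
                        render(key, value)
                self.snapshot = dict(self.model)

        class ObservableModel(dict):
            def __init__(self, *args, **kwargs):
                super().__init__(*args, **kwargs)
                self.listeners = []
            def __setitem__(self, key, value):
                super().__setitem__(key, value)
                for listener in self.listeners:  # push-based: work happens only on writes
                    listener(key, value)

        model = ObservableModel(count=0)
        model.listeners.append(lambda k, v: print(k, "->", v))
        model["count"] = 1  # listener fires immediately; no scanning needed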

    Read the article

  • How to change system time or force a time sync

    - by cpury
    My laptop's CMOS battery is probably running out. I know I have to fix it soon, but until then this very annoying issue keeps me from using it. Scenario: my system clock is reset to 15/12/08 11:00 AM every time I turn on my computer. This has all sorts of side effects, one of the more annoying being that I can't log into my Gmail. At first I just waited for a time sync to happen, as I have that activated and all. It never happened. I googled and didn't find any way of forcing a time sync, which I found very strange. Is there really none? Setting the time and date by hand is also a problem. On my 12.10 installation, the time & date settings are bugged. I remember them being so on my last, older installation as well, though. Of course the easiest way should be to just manually edit the date and time fields by entering a new date. This is possible in theory, but the changes are reverted as soon as the text boxes lose focus. The other way to do it is to click the +/- buttons for a long, long time. The first time I did that, the changes weren't stored either. I found out that afterwards I have to switch from manual to internet-sync mode and wait ~5 seconds until the new time is shown in the top left corner of my screen, otherwise it won't take effect. So a nice solution would be one of the following: setting the time/date by hand, maybe via the terminal, so I can just enter the right values; or a command that would force an immediate time sync, which I can run after booting. I know I have to change the battery soon, but this is seriously keeping me from working...

    Read the article

  • Ugly/Inconsistent Theming in Ubuntu Gnome 12.10

    - by Erland
    Some applications are displaying really ugly widgets and menus. I think it's a GTK issue and perhaps more particularly, only applies to GTK2 apps but I'm not sure. The numerous questions on here that deal with GTK2 v GTK3 themes do not answer my problem. Here is my situation: I'm using Ubuntu Gnome with Gnome Shell (installed using the "upgrade" instructions, rather than fresh install) with the default Adwaita theme The reason I did an upgrade instead of fresh install is because I'm on a Macbook Air and there is no mac image/iso for Ubuntu Gnome Previously, I did a fresh install of Ubuntu Gnome 12.10 and had no theming problems Now, apps like nautilus, rhythmbox, brasero, even third-party ones like Lightread look exactly as expected but other apps, including Firefox, Inkscape, GIMP, Libreoffice look awful. Some examples: Firefox with ugly location bar: http://ubuntuone.com/3e2X0JTa4CT4afC4303U9c vs nautilus location bar: http://ubuntuone.com/3TbHWWuNMcJnlpI4IpjiUO GIMP file dialogue (like Windows 95!): http://ubuntuone.com/4ioCcqq3flgO7zAWgAhfWy vs the rhythmbox file dialogue (correct): http://ubuntuone.com/2xLplCOBvQnyeqdsTGdgXq Menus in Libreoffice (very bad for usability): http://ubuntuone.com/26WTaEz4PMGmiItGeSmBjZ vs menus in rhythmbox: http://ubuntuone.com/4Ib4thMLqohsle6J5KEvuI I've been searching for a solution to this problem for some time. The logical explanation is that all the GTK3 apps are working and anything that is still using GTK2 is not. If that's the case though, why did the same installation (Ubuntu Gnome 12.10) and the same theme (Adwaita) previously work with all those GTK2 apps? Desperate for help!

    Read the article
