Search Results

Search found 22625 results on 905 pages for 'must do better'.


  • OpenWorld: Spotlight on Fusion CRM

    - by Tony Berk
    Oracle OpenWorld is less than 2 weeks away, so you need to start figuring out how you are going to maximize your week. I don't want to discourage you, but I'm pretty sure it is impossible to attend all 2000+ sessions, so you need to focus on what's important to you. Many of our CRM customers will be interested in Fusion CRM, since they have already started Fusion implementations or are determining when to start. If that's you, or you are just looking for an overview of Fusion CRM, we've got you covered!

    Let's start at the top! For an overview of what is in Fusion CRM and where it is going, you should attend the general session and the roadmap session:

    General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations.

    Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15 PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle's commitment to CRM innovation is driving a wide range of future enhancements.

    There is also a General Session for all Fusion Applications providing insight into the current strategy of the full product line and a high-level roadmap for each product area: Oracle Fusion Applications—Overview, Strategy, and Roadmap (GEN9433) - Oct 1, 10:45 AM. This session will be repeated on Oct 3, 10:15 AM.

    Now, if you want to drill down into more detail, there are many more sessions with Oracle product management and customers. I'll highlight a few, but suggest you review the Fusion CRM Focus On document, or search the Content Catalog or Session Builder.

    Driving Sales Performance with Oracle Fusion CRM (CON9744) - Oct 3, 10:15 AM. Demonstrates how sales executives can gain instant visibility into their business, deliver pervasive coaching to their reps, maximize their sales pipeline, and drive team alignment. The result is increased sales performance that enables sales executives to deliver more revenue without increasing their resources or expenses.

    Maximize Your Revenue Potential with Oracle Fusion CRM Sales Planning (CON9751) - Oct 2, 1:15 PM. Learn how Oracle Fusion CRM helps companies intelligently optimize sales planning and manage sales performance, including the ability to predict future sales opportunities and use those predictions, in conjunction with past sales data, to optimally define sales territories, sales quotas, and incentive compensation plans.

    Boost Marketing's Contribution to Revenue with Oracle Fusion CRM Marketing (CON9746) - Oct 3, 11:45 AM. Learn how Oracle Fusion CRM can help your organization integrate sales and marketing using one CRM platform. See how Oracle Fusion CRM can help your organization learn where to invest its precious marketing dollars; drive more revenue with cross-channel marketing and prospecting capabilities, including but not limited to e-mail, Web, and social media; improve lead conversion with integrated lead management functionality; and do more with less by automating many manual tasks.

    Oracle Fusion CRM: Social Marketing (CON11559) - Oct 1, 3:15 PM. Learn how Oracle's acquisition of Collective Intellect, Vitrue, and Involver extends Oracle Fusion Marketing as a world-class social marketing solution.

    Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM. Hear how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle's social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes.

    Of course, we recommend you hear from the current Fusion CRM customers too. So don't miss Oracle Fusion Customer Relationship Management: Customer Adoption and Experiences (CON9415) on Oct 3 at 10:15 AM for a panel of customers discussing implementation experiences, best practices, and benefits.

    After listening to all of this great information, you are probably going to have questions. The experts will be on hand to help answer them and to plan how your organization can get going with Fusion CRM. Be sure to head down to the DEMOgrounds and the CRM Pavilion in the Moscone West Exhibit Hall. And finally, there is the always popular Meet the Experts session focused on Fusion CRM (MTE9658) on Oct 2 at 5 PM (pre-registration via Schedule Builder is recommended).

    In addition, there are more sessions on Mobility, Extensibility, Incentive Compensation, Fusion Customer Hub and other key components of the Fusion Applications infrastructure, Oracle Cloud, and much, much more! For a full list, utilize the Fusion CRM Focus On document and the Content Catalog. Enjoy!

    Read the article

  • GPGPU vs. PhysX for physics simulation

    - by notabene
    Hello. First, a theoretical question: what is better (faster)? Developing your own GPGPU techniques for physics simulation (cloth, fluids, collisions...) or using PhysX? (By "develop" I mean implementing existing algorithms like Navier-Stokes...) I don't care about what will take more time to develop; what will be faster for the end user? As I understand it, PhysX is accelerated through PPU units in the GPU; does that mean the physics simulation can run in parallel with rasterization? Are PPUs different units from the unified shader units used as vertex/geometry/pixel/GPGPU shader units? And a little non-theoretical question: is PhysX able to do sophisticated simulation equal to, let's say, Autodesk Maya's fluid solver? Are there any C++ GPU-accelerated physics frameworks to try? (I am interested in both PhysX and GPGPU; commercial engines are OK too.)
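    For a sense of what "implementing existing algorithms" yourself involves, here is a minimal, illustrative C++ sketch (names and constants are ours, not from PhysX or any engine) of the kind of per-particle update a GPGPU physics implementation parallelizes. Each iteration is independent of the others, which is why such loops map naturally onto one GPU thread per particle:

        #include <cstddef>
        #include <vector>

        // Minimal particle state for an explicit integration step.
        struct Particle {
            float x, y, z;    // position
            float vx, vy, vz; // velocity
        };

        // One explicit Euler step under gravity. Every particle is updated
        // independently, so a GPU port can assign one thread per particle.
        void integrate(std::vector<Particle>& particles, float dt) {
            const float g = -9.81f; // illustrative gravity constant
            for (std::size_t i = 0; i < particles.size(); ++i) {
                Particle& p = particles[i];
                p.vy += g * dt;    // accumulate gravity
                p.x  += p.vx * dt; // advance position
                p.y  += p.vy * dt;
                p.z  += p.vz * dt;
            }
        }

    A tuned engine such as PhysX ships kernels of this general shape, plus the much harder constraint and collision solving, which is the heart of the build-vs-buy trade-off the question raises.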

    Read the article

  • Plan Caching and Query Memory Part I – When not to use stored procedure or other plan caching mechanisms like sp_executesql or prepared statement

    - by sqlworkshops
    The most common performance mistake SQL Server developers make: SQL Server estimates the memory requirement for a query at compilation time. This mechanism is fine for dynamic queries that need memory, but not for queries that cache the plan. With dynamic queries the plan is not reused for different sets of parameter values/predicates, so a different amount of memory can be estimated for each set of parameter values/predicates. Common memory-allocating queries are those that perform Sort and Hash Match operations such as Hash Join, Hash Aggregation, or Hash Union. This article covers Sort with examples. It is recommended to read Plan Caching and Query Memory Part II after this article, which covers Hash Match operations.

    When the plan is cached by using a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement, SQL Server estimates the memory requirement based on the first set of execution parameters. When the same stored procedure is later called with a different set of parameter values, the same amount of memory is used to execute it. This can lead to underestimation or overestimation of memory on plan reuse. Overestimation might not be a noticeable issue for Sort operations, but underestimation will lead to spills to tempdb, resulting in poor performance.

    This article covers underestimation/overestimation of memory for Sort; Plan Caching and Query Memory Part II covers it for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote click here.

    In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of spilling to tempdb, unless the memory requirement of a stored procedure does not change significantly with its predicates.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Enough theory, let's see an example where we initially sort 1 month of data and then use the same stored procedure to sort 6 months of data. Let's create a stored procedure that sorts customers by name within a certain date range.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 48 ms to complete. It was granted 6656 KB based on 43199.9 rows being estimated.
    The estimated number of rows, 43199.9, is close to the actual number of rows, 43200, so the memory estimate should be fine. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 679 ms to complete. It was granted 6656 KB based on 43199.9 rows being estimated. The estimated number of rows, 43199.9, is very different from the actual number of rows, 259200, because the estimate is based on the first set of parameter values supplied to the stored procedure, which covered 1 month in our case. This underestimation causes the sort to spill to tempdb, resulting in poor performance. There were Sort Warnings in SQL Profiler.

    To monitor the amount of data written to and read from tempdb, one can execute select num_of_bytes_written, num_of_bytes_read from sys.dm_io_virtual_file_stats(2, NULL) before and after the stored procedure execution; for additional information refer to the webcast: www.sqlworkshops.com/webcasts.

    Let's recompile the stored procedure and then execute it first with the 6 month date range. In a production instance it is not advisable to use sp_recompile; one should instead use DBCC FREEPROCCACHE (plan_handle), due to the locking issues involved with sp_recompile. Refer to our webcasts for further details.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    Now the stored procedure took only 294 ms instead of 679 ms. It was granted 26832 KB of memory. The estimated number of rows, 259200, matches the actual number of rows, 259200. The better performance of this stored procedure is due to the better memory estimate, which avoids the sort spilling to tempdb. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with a 1 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-31'
        go

    The stored procedure took 49 ms to complete, similar to our very first execution. This time the stored procedure was granted more memory (26832 KB) than necessary (6656 KB), because the estimate was based on 6 months of data (259200 rows) instead of 1 month of data (43199.9 rows); again, the estimate comes from the first set of parameter values supplied to the stored procedure, which covered 6 months in this case. This overestimation did not affect performance here, but it might affect the performance of other concurrent queries requiring memory, and hence overestimation is not recommended. Overestimation can also hurt the performance of Hash Match operations; refer to the article Plan Caching and Query Memory Part II for further details.

    Let's recompile the stored procedure and then execute it first with a 2 day date range.

        exec sp_recompile CustomersByCreationDate
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-02'
        go

    The stored procedure took 1 ms. It was granted 1024 KB based on 1440 rows being estimated. There were no Sort Warnings in SQL Profiler.

    Now let's execute the stored procedure with the 6 month date range.
        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 955 ms to complete, far higher than the 679 ms or 294 ms we noticed before. It was granted 1024 KB based on 1440 rows being estimated, but we noticed earlier that this stored procedure needs 26832 KB of memory to execute the 6 month date range optimally without spilling to tempdb. This is a clear underestimation of memory and the reason for the very poor performance. There were Sort Warnings in SQL Profiler, and unlike before this was a Multiple pass sort instead of a Single pass sort; this occurs when the granted memory is too low.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint, or the optimize for hint to allocate memory for a predefined date range.

    Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The execution with the 1 month date range has a good estimate like before, and the execution with the 6 month date range also has a good estimate and memory grant, because the query was recompiled with the current set of parameter values. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.

    Let's recreate the stored procedure with an optimize for hint for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByCreationDate
        go
        create proc CustomersByCreationDate @CreationDateFrom datetime, @CreationDateTo datetime as
        begin
              declare @CustomerID int, @CustomerName varchar(48), @CreationDate datetime
              select @CustomerName = c.CustomerName, @CreationDate = c.CreationDate from Customers c
                    where c.CreationDate between @CreationDateFrom and @CreationDateTo
                    order by c.CustomerName
              option (maxdop 1, optimize for (@CreationDateFrom = '2001-01-01', @CreationDateTo = '2001-06-30'))
        end
        go

    Let's execute the stored procedure initially with a 1 month date range and then with a 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-01-30'
        exec CustomersByCreationDate '2001-01-01', '2001-06-30'
        go

    The stored procedure took 48 ms and 291 ms, in line with the previous optimal execution times. The execution with the 1 month date range overestimates rows and memory, because we provided a hint to optimize for 6 months of data. The execution with the 6 month date range has a good estimate and memory grant for the same reason.
    Let's execute the stored procedure with a 12 month date range using the currently cached plan, which was optimized for the 6 month date range.

        --Example provided by www.sqlworkshops.com
        exec CustomersByCreationDate '2001-01-01', '2001-12-31'
        go

    The stored procedure took 1138 ms to complete. 259200 rows were estimated, based on the optimize for hint value for the 6 month date range, while the actual number of rows is 524160 due to the 12 month date range. The stored procedure was granted enough memory to sort the 6 month date range, not the 12 month date range, so the sort spills to tempdb. There were Sort Warnings in SQL Profiler. As we see above, the optimize for hint cannot guarantee enough memory and optimal performance the way the recompile hint can.

    This article covered underestimation/overestimation of memory for Sort; Plan Caching and Query Memory Part II covers it for Hash Match operations. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation of memory affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    Summary: A cached plan might lead to underestimation or overestimation of memory, because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, which adds compilation overhead; however, in most cases it is better to pay for compilation than to spill the sort to tempdb, which can be very expensive compared to the compilation cost. The other possibility is the optimize for hint, but if one sorts more data than the hint anticipates, the sort will still spill; on the other side, there is also the possibility of overestimation tying up memory needed by other concurrently executing queries, and in the case of Hash Match operations this overestimation might itself lead to poor performance. When the values used in the optimize for hint are archived from the database, the estimate will be wrong, leading to the worst performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend watching them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work.
    This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan
    [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • P2P synchronization: can a player update fields of other players?

    - by CherryQu
    I know that synchronization is a huge topic, so I have minimized the problem to this example case. Let's say Alice and Bob are playing a P2P game, fighting against each other. If Alice hits Bob, how should I design the network component to make Bob's HP decrease? I can think of two approaches:

    1. Alice performs Bob.HP--, then sends Bob's reduced HP to Bob.
    2. Alice sends an "I just hit Bob" signal to Bob. Bob checks it, reduces his own HP, then sends his new HP to everyone, including Alice.

    I think the second approach is better, because I don't think a player in a P2P game should be able to modify other players' private fields. Otherwise cheating would be too easy, right? My philosophy is that, in a P2P game especially, a player's attributes, and all attributes of the objects belonging to him, should only be updated by the player himself. However, I can't prove that this is right. Could someone give me some evidence? Thanks :)
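    A minimal C++ sketch of the second approach may make the flow concrete; the message types, the broadcast() stub, and the damage rule are all hypothetical, not from any real networking library. Each peer mutates only its own state and then announces the result:

        #include <cstdint>
        #include <iostream>

        // Hypothetical wire messages for the hit/HP protocol.
        struct HitEvent { std::uint32_t attackerId, targetId; int damage; };
        struct HpUpdate { std::uint32_t playerId; int newHp; };

        // Stand-in for the real network layer.
        void broadcast(const HpUpdate& msg) {
            std::cout << "player " << msg.playerId << " now has " << msg.newHp << " HP\n";
        }

        struct Player {
            std::uint32_t id;
            int hp = 100;

            // Approach 2: only the local player mutates its own HP.
            void onHitEvent(const HitEvent& e) {
                if (e.targetId != id) return; // not about me, ignore
                hp -= e.damage;               // a real game would validate/clamp here
                broadcast(HpUpdate{id, hp});  // tell everyone, including the attacker
            }
        };

        int main() {
            Player bob{2};
            bob.onHitEvent(HitEvent{1, 2, 15}); // Alice (id 1) reports hitting Bob (id 2)
        }

    Note that this only moves the trust around: Bob can still lie about his own HP, which is why P2P games often layer peer cross-validation or deterministic lockstep on top of it.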

    Read the article

  • C++ : Lack of Standardization at the Binary Level

    - by Nawaz
    Why didn't ISO/ANSI standardize C++ at the binary level? There are many portability issues with C++ which exist only because of this lack of standardization at the binary level. Don Box writes (quoting from his book Essential COM, chapter "COM As A Better C++"):

    C++ and Portability. Once the decision is made to distribute a C++ class as a DLL, one is faced with one of the fundamental weaknesses of C++, that is, lack of standardization at the binary level. Although the ISO/ANSI C++ Draft Working Paper attempts to codify which programs will compile and what the semantic effects of running them will be, it makes no attempt to standardize the binary runtime model of C++. The first time this problem will become evident is when a client tries to link against the FastString DLL's import library from a C++ development environment other than the one used to build the FastString DLL.

    Are there other benefits, or losses, resulting from this lack of binary standardization?
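    To make "binary level" concrete, here is a sketch of the classic workaround (the general technique, not taken from the book; FastString's layout here is invented): export a flat C facade, since unmangled C functions and opaque pointers have a de facto stable ABI on a given platform, while C++ name mangling and class layout do not:

        #include <cstring>

        // Stand-in for the book's FastString example class.
        class FastString {
        public:
            explicit FastString(const char* s) {
                std::strncpy(buf_, s, sizeof buf_ - 1);
                buf_[sizeof buf_ - 1] = '\0';
            }
            int length() const { return static_cast<int>(std::strlen(buf_)); }
        private:
            char buf_[256];
        };

        // The exported boundary: no mangled names and no class layout cross
        // the DLL edge; clients only see opaque pointers.
        extern "C" {
            FastString* FastString_Create(const char* s) { return new FastString(s); }
            int FastString_Length(const FastString* fs) { return fs->length(); }
            void FastString_Destroy(FastString* fs) { delete fs; }
        }

    This flat-API shape is essentially what COM then formalizes, which is why Don Box raises the issue in a book about COM.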

    Read the article

  • TechEd 2012 - last day

    - by Stefan Barrett
    I miss when TechEd was 5 days long! It's Thursday already and we are on the last day. The snacks haven't appeared, but more developer sessions have. Having access to the online schedule is very important, since the new sessions are usually the more interesting ones. On the whole, I think the wifi network has been worse this year: more blank spots, and more areas where performance is bad. I do think it's funny that I get better reception on my iPad than on my phones (iPad & Nokia/Microsoft). There seem to be fewer areas for people to plug in their own laptops this year. I do wonder whether, since more and more people have smartphones, and since most of the attendees are from America, they are not using the wifi but rather their own phone provider. If I was in Japan, I would probably do the same. About to attend a session on F#, something which is probably going to be important for me over the next year.

    Read the article

  • SEO: mobile version using AJAX: how can it be properly read by search engines?

    - by Olivier Pons
    Before anything else, I'd like to emphasize that I've already read this and this. Here's what I can do:

    Choice (1): create a classical Web version with all products in the page - http://www.myweb.com. Create a mobile Web version with all products in the page and use jQuery Mobile to format it all nicely. But this may take long to load and format, and may provide a bad user experience - http://m.myweb.com.

    Choice (2): create a classical Web version with all products in the page. Create a mobile Web version with almost nothing but a Web page showing "wait", then download all products into the page using AJAX and use jQuery Mobile to format it all nicely. Showing a "wait, loading" message gives far more time to do whatever I want and may provide a better user experience - http://m.myweb.com.

    Question: if I choose solution (2), Google won't read anything on the mobile version (because all products will be downloaded into the page using AJAX), so it won't be properly read by search engines. What should I do, and how?

    Read the article

  • Which GPU with my CPU for 1080p flash?

    - by oshirowanen
    Based on the following site: http://www.adobe.com/products/flashplayer/systemreqs/ I need the following minimum spec to play 1080p flash video via a browser:

    CPU: 1.8GHz Intel Core Duo, AMD Athlon 64 X2 4200+, or faster processor
    RAM: 512MB of RAM
    GPU: 64MB of graphics memory

    I only have a 2.8GHz Pentium 4 processor, which is nowhere near as good as the processors listed above. I don't want to upgrade my processor, as I think it would mean changing the motherboard etc. So, my question is: what is the cheapest PCI-E GPU I can buy which will allow me to play smooth 1080p flash video via a browser? I think the cheapest I can get is the 8400GS, but I am not sure if that will be able to handle 1080p with the processor I have. I have looked at the GT520 and was wondering if this is the cheapest GPU that will do what I need, or if there is something cheaper which will do 1080p with a 2.8GHz Pentium 4. Or will I have to get something better than a GT520?

    Read the article

  • Running TeamCity from Amazon EC2 - Cloud-based scalable build and continuous integration

    I've been having fun playing with the Amazon EC2 cloud service. I set up a server running TeamCity, and an image of a server that just runs a TeamCity agent. I also set up TeamCity to automatically instantiate agents on EC2 and shut them down based upon availability of free agents. Here's how I did it: the first step was setting up the TeamCity server. Create an account on Amazon EC2 (BTW, Amazon's site works better in IE than it does in Chrome... who knew!?). Open the EC2 dashboard, and...

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with OpenSolaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11 when it became available. I am not a big fan of upgrades, as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if I had only read the documentation a bit better in advance.

    For a number of reasons I prefer a dual boot system. The most important one is that, especially with mobile devices, you often run into network problems, and you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant, and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old openSUSE installation that had not been booted for a while.

    I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris into the MBR and onto the root partition. But then, how to enable dual boot with the 2 OSes? Searching the web one mainly finds information about dual boot of Linux and Linux, or Linux and Windows. I do not want to explain all the wrong configurations I worked through; I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OSes, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still, there were some hurdles to jump over:

    Ubuntu did not like having its boot blocks placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you could get that done.
    Ubuntu needs an active partition; that was easy to achieve.
    grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them.

    BTW: The usual disclaimer is valid. There is no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this.

    So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubuntu things initially were a bit more complicated, as I did not know how to boot it, and the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended; at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using

        grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3

    So, I now had a system with the Solaris boot loader in the MBR, Solaris specific boot blocks on the Solaris root partition, and Ubuntu specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done.
    Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!!!!):

        title Ubuntu 11.10
        root (hd0,2)
        makeactive
        chainloader +1
        boot

    The Ubuntu root file system sits on the third partition (/dev/sda3).

    Ubuntu: I have added the following lines to /etc/grub.d/40_custom:

        menuentry "Solaris 11/11" {
            set root=(hd0,1)
            chainloader +1
        }

    Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me).

    As debugging grub is a bit complicated, I used the grub CLI to perform some tests, and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment.

    I hope this is helpful for some of the readers.

    Hartmut

    Read the article

  • Computer Science or Computer Engineering for Data Science and Machine Learning

    - by ATMathew
    I'm a 25 year old data consultant who is considering returning to school to get a second bachelor's degree, in computer science or computer engineering. My interest is data science and machine learning. I use programming as a means to an end, and use languages like Python, R, C, Java, and Hadoop to find meaning in large data sets. Would a computer science or a computer engineering degree be better for this? I realize that a statistics degree might be even more beneficial, but I'll be at a school which doesn't have a stats department or a computational math department.

    Read the article

  • The Ins and Outs of Effective Smart Grid Data Management

    - by caroline.yu
    Oracle Utilities and Accenture recently sponsored a one-hour Web cast entitled, "The Ins and Outs of Effective Smart Grid Data Management." Oracle and Accenture created this Web cast to help utilities better understand the types of data collected over smart grid networks and the issues associated with mapping out a coherent information management strategy. The Web cast also addressed important points that utilities must consider with the imminent flood of data that both present and next-generation smart grid components will generate. The three speakers, including Oracle Utilities' Brad Williams, focused on the key factors associated with taking the millions of data points captured in real time and implementing the strategies, frameworks and technologies that enable utilities to process, store, analyze, visualize, integrate, transport and transform data into the information required to deliver targeted business benefits. The Web cast replay is available here. The Web cast slides are available here.

    Read the article

  • Best way of JavaScript web development in NetBeans (hot deployment)

    - by marcelocbf
    I'm beginning JavaScript development, and as a beginner in JavaScript I make a lot of mistakes. The way I'm developing is very counter-productive, because for every mistake I fix I have to shut down Glassfish, re-build the app and re-deploy it. My app is a Java back-end with REST services, and HTML, JavaScript and CSS for the frontend. Everything is packed in a .ear file. Right now I'm just working with the frontend, but I still have to go through this whole process to update the files. My question is: is there a better way of doing this? Can somebody tell me how you work in a similar setup to do your everyday development?

    Read the article

  • Can't get wireless on macbook pro 8,2

    - by Jeff
    I'm a Linux newb, and I have tried several of the fixes listed to try and get my wifi drivers to work, but to no avail. Does anyone here know why this isn't working for me, or better yet, how to fix it? Under lspci -vvv I get the following output:

        03:00.0 Network controller: Broadcom Corporation BCM4331 802.11a/b/g/n (rev 02)
            Subsystem: Apple Inc. AirPort Extreme
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR-
            Kernel modules: bcma

    With sudo lshw -class network I get this output:

        *-network UNCLAIMED
            description: Network controller
            product: BCM4331 802.11a/b/g/n
            vendor: Broadcom Corporation
            physical id: 0
            bus info: pci@0000:03:00.0
            version: 02
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: latency=0
            resources: memory:b0600000-b0603fff

    Any help would be greatly appreciated!

    Read the article

  • Mark Hurd on Oracle's Strategy to Be the Best

    - by Tuula Fai
    Mark Hurd, President of Oracle, energized a packed audience this Monday morning at OpenWorld with his keynote outlining Oracle's four-pillar strategy:

    1. Be the leader at every level of the technology stack—applications, middleware, database, operating system, virtual machine, servers, and storage.
    2. Vertically integrate these levels into differentiated solutions.
    3. Offer Fusion, the next generation of applications, which are modular and can run in the cloud, on-premise, or both (hybrid).
    4. Deliver this technology portfolio through industry lenses to help Oracle customers solve their problems while innovating and becoming more efficient.

    Hurd's message resonated throughout Monday's Customer Experience (CX) sessions as we learned about Oracle's investment in integrating its best-of-breed CX solutions to deliver an end-to-end suite that addresses every part of the customer lifecycle. For example, in the area of customer service, Oracle is developing enhancements to help contact center agents:

    Better understand customer needs through social listening tools that are integrated with knowledge management.
    Empower themselves with internal collaboration and mobility tools.
    Adapt to customer needs by engaging customers through chat during a service or commerce interaction, so they can deliver a great customer experience while transforming from a cost center into a profit center.

    Read the article

  • As a code monkey, how to discuss programming with a guy who almost has a doctorate in computer science

    - by Peter Turner
    A friend of my wife's is coming over for dinner tonight, and he is a lot smarter than me. What do we have in common? Well... a Bachelor's in Computer Science, and that should be enough of a conversation starter. But he's nearly completed his doctoral studies and is light years ahead of me in his particular area, which I find fascinating but don't have any legit reason to care about (except for maybe a better way through heavy traffic - he's a combinatorics guy specializing in that, I think), and I got married, had some kids, and am a professional programmer for medical records software. We've got a lot in common, but there's a point where neither of us cares about or understands the other's work - although I really want to learn from him, and I'm not certain he'd even want to talk about his work. So, for all you doctors or code monkeys: what's a good conversation starter?

    Read the article

  • Android 2.4 – Ice Cream – Coming…

    - by Boonei
    Take a deep breath before you read further. I am sure some of us are still waiting for the Gingerbread update on our mobile phones, and most of us are still waiting even to get Froyo. But before we blink our eyes, the latest version of the Android OS will be out: Android 2.4, aka Ice Cream. It's expected to be out this June/July, so for sure we should get a spoonful at Google I/O coming this May. Stay tuned… I guess it's better nowadays to buy a "pure Android" phone; otherwise we would be waiting forever to get the latest version on our phones. [Image Credit: D Sharon Pruitt, available under Creative Commons]

    Read the article

  • What are the benefits and drawbacks of documentation vs tutorials vs video tutorials [closed]

    - by Cat
    Which types of learning resources do you find the most helpful, for which kinds of learning, and/or perhaps at specific times? Some examples of types of learning you could consider:

    When starting to integrate a new SDK into an existing codebase
    When learning a new framework without having to integrate legacy code
    When digging deeper into an already-used SDK that you may not know very well yet

    For example, (video) tutorials are usually very easy to follow and tell a story from beginning to end to get results, but they nearly always assume starting from scratch or from a previous tutorial. Such a resource is therefore useful for quick learning if you don't have legacy code around, but less so if you have to search for the best fit with the code you already have. SDK documentation, on the other hand, is well-structured but does not tell a story. It is more difficult to get to a specific larger result with documentation alone, but it is a better fit when you do have legacy code around and are searching for perhaps non-obvious ways of employing the SDK or library. Are there other forms of resources that you find useful, such as interactive training?

    Read the article

  • Why do we keep using CSV?

    - by Stephen
    Why do we keep using CSV? I recently made a shift to working in the health domain, and despite the wonderful work on data transfer standards, all data transfer is in CSV, both for reporting to external organisations and for data migrations when implementing new systems. Unfortunately the use of CSV is the cause of the endless repetition of the same stupid errors (bad escaping, failing to handle null fields, etc.), with the same waste of developer time. I know we can do better, and anything between JSON and XML (depending on the instance) would be fine. (Most of the time this is data going from one MS SQL Server 2005 instance to another!) I feel as if each time I see this happening I am literally watching one developer waste another's time. So why do we keep shafting each other? When will we stop?

    Read the article

  • Sony Ericsson txt: workhorse feature phone with Java ME tech

    - by hinkmond
    Just like your basic Quarter Horse, the new Sony Ericsson txt feature phone might not be as fancy as a "thoroughbred" smartphone, but it can sure get the job done with Java ME technology. See: Sony Ericsson txt w/Java ME Here's a quote: ...comes with the usual features such as a web browser, email client and music player and FM radio, plus support for social networking applications and a YouTube client. You can download and install additional Java applications... Sometimes the simple workhorse feature phone (with Java ME) is much better to go with than the idiosyncratic thoroughbred smartphone. Hinkmond

    Read the article

  • Join us for Live Oracle Linux and Oracle VM Cloud Events in Europe

    - by Monica Kumar
    Join us for a series of live events and discover how Oracle VM and Oracle Linux offer an integrated and optimized infrastructure for quickly deploying a private cloud environment at lower cost. As one of the most widely deployed operating systems today, Oracle Linux delivers higher performance, better reliability, and stability, at a lower cost for your cloud environments. Oracle VM is an application-driven server virtualization solution fully integrated and certified with Oracle applications to deliver rapid application deployment and simplified management. With Oracle VM, you have peace of mind that the entire Oracle stack deployed is fully certified by Oracle. Register now for any of the upcoming events, and meet with Oracle experts to discuss how we can help in enabling your private cloud.

    Nov 20: Foundation for the Cloud: Oracle Linux and Oracle VM (Belgium)
    Nov 21: Oracle Linux & Oracle VM Enabling Private Cloud (Germany)
    Nov 28: Realize Substantial Savings and Increased Efficiency with Oracle Linux and Oracle VM (Luxembourg)
    Nov 29: Foundation for the Cloud: Oracle Linux and Oracle VM (Netherlands)
    Dec 5: MySQL Tech Tour, including Oracle Linux and Oracle VM (France)

    Hope to see you at one of these events!

    Read the article

  • Should we rename overloaded methods?

    - by Mik378
    Assume an interface containing these methods:

        Car find(long id);
        List<Car> find(String model);

    Is it better to rename them like this?

        Car findById(long id);
        List<Car> findByModel(String model);

    Indeed, any developer who uses this API won't need to look at the interface to know the possible arguments of the original find() methods. So my question is more general: what is the benefit of using overloaded methods in code, given that it reduces readability?

    Read the article

  • 2010 Goals Review

    - by andyleonard
    Introduction: Earlier this year, I responded to Tim Ford's ( Blog / Twitter ) tag (in a post about 2010 Resolutions and Themeword) with 2010 Themeword and Goals. Let's see how I did.

    Resolutions:
    1. I need to take better care of Andy. On this, I failed. I took even worse care of myself than before. I have to address this in 2011. You can help by pinging me on Twitter ( @AndyLeonard ) every day in 2011 and asking me if I've exercised today.
    2. I want to continue to serve the SQL Server community in several...(read more)

    Read the article

  • Advice and strategies for browser compatibility of web applications in a corporate environment

    - by TiagoBrenck
    With the new CSS 3 and HTML 5 technology, web applications have gained a lot of new tools for better UI (user interface) interaction, beautiful templates, and even responsive layouts that fit tablets and smartphones. Within a corporate environment, those new technologies are required so the company can "follow" the IT evolution and its competitors, but companies also want those new web applications to support old browsers. How do you deal with this situation? On one side, we are asked to follow the news and IT evolutions, create responsive layouts and use a lot of cool jQuery plugins. On the other side, we are asked to support old browsers that don't support those new responsive features, plugins or components. I would like advice and strategies for creating "modern" web applications that are also supported on old browsers. How does your company deal with this situation? Is it possible to have the same web application running well and looking good on old browsers, while being responsive and interactive on current browsers?

    Read the article
