Search Results

Search found 366 results on 15 pages for 'eat at joes'.


  • Good guidelines for developing an ecommerce application

    - by kaciula
    I'm building an ecommerce application on Android and, as it is my first serious project, I'm trying to figure out the best approach beforehand. The application talks to a web service (the Magento API, meaning SOAP or XML-RPC, unfortunately) and gets all the content on the phone (product categories, product details, user credentials etc.). I think it should do lazy loading or something like that. So, I was thinking of keeping the user credentials in a custom object stored in SharedPreferences so that every Activity can easily access it. I'll use a couple of ListViews to show the content and AsyncTask to fetch the data needed. Should I keep all the data in memory in objects or should I use some sort of cache or a local database? Also, I'm planning to use a HashMap with SoftReferences to hold the bitmaps I am downloading. But wouldn't this eat a lot of memory? How can all the activities have access to all these objects (ecommerce basket etc.)? I'm thinking of passing them using Intents but this doesn't seem right to me. Can SharedPreferences be used for a lot of objects and are there any concurrency issues? Any pointers would be really appreciated. What are some good guidelines? What classes should I look into? Do you know of any resources on the Internet for me to check out?
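
    The SoftReference-HashMap idea above is workable; a minimal sketch of such a cache follows (the BitmapCache name is invented for illustration, and a miss can mean either "never loaded" or "reclaimed by the GC", so callers must be ready to re-download):

        import java.lang.ref.SoftReference;
        import java.util.HashMap;
        import java.util.Map;

        import android.graphics.Bitmap;

        // Entries can be reclaimed by the GC under memory pressure, which is
        // exactly what keeps this cache from eating all available memory.
        public class BitmapCache {
            private final Map<String, SoftReference<Bitmap>> cache =
                    new HashMap<String, SoftReference<Bitmap>>();

            public synchronized void put(String url, Bitmap bitmap) {
                cache.put(url, new SoftReference<Bitmap>(bitmap));
            }

            public synchronized Bitmap get(String url) {
                SoftReference<Bitmap> ref = cache.get(url);
                if (ref == null) {
                    return null;           // never cached
                }
                Bitmap bitmap = ref.get();
                if (bitmap == null) {
                    cache.remove(url);     // reclaimed; drop the stale entry
                }
                return bitmap;
            }
        }

    Note that SharedPreferences only stores primitives and strings, so it suits credentials, but shared mutable state like a basket is usually better kept in a custom Application subclass or a local database than passed around in Intents.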

    Read the article

  • C++ interpreter conceptual problem

    - by Jan Wilkins
    I've built an interpreter in C++ for a language I created. One main problem in the design was that I had two different types in the language: number and string. So I have to pass around a struct like: class myInterpreterValue { myInterpreterType type; int intValue; string strValue; } Objects of this class are passed around a million times a second, e.g. during a countdown loop in my language. Profiling showed that 85% of the run time is eaten by the allocation function of the string template. This is pretty clear to me: my interpreter has a poor design and doesn't use pointers enough. Yet I don't see an option: I can't use pointers in most cases, as I just have to make copies. What can I do about this? Is a class like this a better idea? vector<string> strTable; vector<int> intTable; class myInterpreterValue { myInterpreterType type; int locationInTable; } So the class only knows what type it represents and the position in the table. This, however, again has disadvantages: I'd have to add temporary values to the string/int vector tables and then remove them again, which would eat a lot of performance again. How do interpreters of languages like Python or Ruby do this? They somehow need a struct that represents a value in the language: something that can be either an int or a string.
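
    For what it's worth, one common fix is to make copies cheap rather than avoiding them: keep the tag and the int inline, and hold the string behind a reference-counted pointer so a copy only bumps a counter instead of allocating. A sketch under that assumption (names invented):

        #include <memory>
        #include <string>

        enum class Type { Int, Str };

        class Value {
        public:
            static Value fromInt(int i) { Value v; v.int_ = i; return v; }
            static Value fromStr(std::string s) {
                Value v;
                v.type_ = Type::Str;
                // shared_ptr to an immutable string: copying Value never allocates
                v.str_ = std::make_shared<const std::string>(std::move(s));
                return v;
            }
            Type type() const { return type_; }
            int asInt() const { return int_; }
            const std::string& asStr() const { return *str_; }

        private:
            Value() = default;
            Type type_ = Type::Int;
            int int_ = 0;                               // valid when type_ == Int
            std::shared_ptr<const std::string> str_;    // valid when type_ == Str
        };

    This is roughly what dynamic-language runtimes do: in CPython, for example, every value is a pointer to a heap object carrying a type tag and a reference count, and small integers are interned so they are never allocated per use.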

    Read the article

  • In Inform 7, is it possible to use a second noun construct with "pull"?

    - by Beska
    I'll eat my hat if I get a good answer to this... I'm a rank beginner in Inform 7, and I'm guessing this isn't that hard, but there are probably not many people here who are familiar with Inform 7. Still, nothing ventured... I'm trying to create a custom response to a "pull" action. Unfortunately, I think the "pull" action doesn't normally expect a second noun. So I'm trying something like this: The nails are some things in the Foyer. The nails are scenery. Instead of pulling the nails: If the second noun is nothing: say "How? Are you going to pull the nails with your teeth?"; otherwise: say "I don't think that's going to do the job." But while this compiles, and the first part works, the "I don't think..." section is never reached... the interpreter just responds "I only understood you as far as wanting to pull the nails." Do I have to create my own custom action for this? Override the standard pull action? Am I missing something simple that will let me get this to work?
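
    One approach that is commonly suggested (an untested sketch; "yanking it with" is an arbitrary name) is to leave the built-in one-noun pulling action alone and define a new action that applies to two things, with its own Understand line for the "pull X with Y" phrasing:

        Yanking it with is an action applying to two things.
        Understand "pull [something] with [something]" as yanking it with.

        Instead of yanking the nails with something:
            say "I don't think that's going to do the job."

    With that in place, the existing Instead of pulling rule only needs to handle the bare "pull the nails" case.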

    Read the article

  • Class unloading in Java

    - by java_geek
    When a classloader is garbage collected, are the classes loaded by it unloaded? When the JVM is running in verbose mode, all the loaded classes are output. Similarly, will the JVM log when it unloads a class? I wrote a custom class loader to test this, but could not see any verbose log for unloading of the classes. CustomClassLoader loader = new CustomClassLoader(new URL[]{}, CustomClassLoader.class.getClassLoader()); loader.addURL("D:\\workspace\\ClassLoaderTest\\implementation.jar"); Class c = null; try { c = Class.forName("Horse", false, loader); if (c != null) { try { Animal animal = (Animal)c.newInstance(); animal.eat(); } catch(Exception ex) { ex.printStackTrace(); } } } catch(Exception e) { e.printStackTrace(); } loader = null; byte[] b = new byte[58*1024*1024]; System.gc(); ClassLoadingMXBean clBean = ManagementFactory.getClassLoadingMXBean(); System.out.println("Number of classes currently loaded " + clBean.getLoadedClassCount()); System.out.println("Number of classes loaded totally " + clBean.getTotalLoadedClassCount()); System.out.println("Number of classes unloaded " + clBean.getUnloadedClassCount()); Even the ClassLoadingMXBean gives the number of unloaded classes as 0. How can I know that a class is unloaded when the class loader is GCed?
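
    Two things are worth checking here (a sketch, not verified against every JVM): the locals c and animal also hold strong references that pin the class, so they must be cleared before the System.gc() call, and HotSpot only prints unload events when started with -verbose:class (older releases also offered -XX:+TraceClassUnloading):

        // Before forcing GC, drop every strong reference that pins the class:
        // the instance, the Class object, and the loader itself.
        animal = null;   // requires declaring 'animal' outside the try block
        c = null;
        loader = null;
        System.gc();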

    Read the article

  • How to hide the console of batch scripts without losing std err/out streams

    - by cooper.thompson
    My question is similar to Running a CMD or BAT in silent mode, but with one additional constraint. If you use WshScript.Run in VBScript, you lose access to the standard in/error/out streams of the process. WshScript.Exec gives you access to the standard streams, but you can't hide your windows. How can you have your cake (hide the windows) and eat it too (have direct access to the console streams)? I'm currently thinking about a C++ executable which creates a new Window Station and Desktop (see MSDN) and runs a specified script within that new Desktop (I'm not yet an expert on Window Stations and Desktops, so this idea may be misguided). This idea is based loosely on Condor's USE_VISIBLE_DESKTOP feature, which, if disabled, runs Condor jobs on a non-visible Desktop. I haven't quite figured out whether this requires elevated privileges. The tradeoff of this approach is that your script can disappear into limbo if it blocks on user input. Does anyone have any additional ideas? Or feedback on the approach outlined above? Edit: Also, the purpose of our script is to set up the user environment, so running as another user, or as a system scheduled task, isn't really an option (unless there are clever tricks I don't know about).
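
    For the "cake and eat it" part on its own, the usual Win32 answer is to start the script with CreateProcess using CREATE_NO_WINDOW and hand it the write end of an anonymous pipe as stdout/stderr. A bare-bones sketch (error handling omitted; build.bat is a placeholder):

        #include <windows.h>
        #include <stdio.h>

        int main() {
            SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // inheritable handles
            HANDLE readEnd, writeEnd;
            CreatePipe(&readEnd, &writeEnd, &sa, 0);
            SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0); // keep our end private

            STARTUPINFOA si = { sizeof(si) };
            si.dwFlags = STARTF_USESTDHANDLES;
            si.hStdOutput = writeEnd;
            si.hStdError  = writeEnd;
            PROCESS_INFORMATION pi;

            char cmd[] = "cmd.exe /c build.bat";
            CreateProcessA(NULL, cmd, NULL, NULL, TRUE,
                           CREATE_NO_WINDOW, NULL, NULL, &si, &pi);
            CloseHandle(writeEnd);   // so ReadFile sees EOF when the child exits

            char buf[4096];
            DWORD n;
            while (ReadFile(readEnd, buf, sizeof(buf) - 1, &n, NULL) && n > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);  // forward the child's output
            }
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hProcess); CloseHandle(pi.hThread); CloseHandle(readEnd);
            return 0;
        }

    This avoids the separate-Desktop machinery entirely, though it shares the same caveat: a script that prompts for input will hang invisibly.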

    Read the article

  • How to make a program not show up in Alt-Tab or on the taskbar.

    - by Scott Chamberlain
    I have a program that needs to sit in the background; when a user connects to an RDP session it will do some stuff, then launch a program. When that program is closed it will do some housekeeping and log off the session. The current way I am doing it is like this: I have the terminal server launch this application. I have it set as a Windows Forms application and my code is this: public static void Main() { //Do some setup work Process proc = new Process(); //setup the process proc.Start(); proc.WaitForExit(); //Do some housecleaning NativeMethods.ExitWindowsEx(0, 0); } I really like this because there is no item in the taskbar and there is nothing showing up in Alt-Tab. However, to do this I gave up access to functions like void WndProc(ref Message m). So now I can't listen to Windows messages (like WTS_REMOTE_DISCONNECT or WTS_SESSION_LOGOFF) and do not have a handle to use for bool WTSRegisterSessionNotification(IntPtr hWnd, int dwFlags); I would like my code to be more robust so it will do the housecleaning if the user logs off or disconnects from the session before he closes the program. Any recommendations on how I can have my cake and eat it too?
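
    One common way to get messages back without acquiring a taskbar presence is a hidden NativeWindow created purely to receive them; a sketch (the constant values are from wtsapi32.h/winuser.h, and a message loop must still be running on the thread):

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        // Invisible window whose only job is to receive session-change messages.
        class SessionListener : NativeWindow
        {
            const int WM_WTSSESSION_CHANGE = 0x02B1;
            const int NOTIFY_FOR_THIS_SESSION = 0;

            [DllImport("wtsapi32.dll")]
            static extern bool WTSRegisterSessionNotification(IntPtr hWnd, int dwFlags);
            [DllImport("wtsapi32.dll")]
            static extern bool WTSUnRegisterSessionNotification(IntPtr hWnd);

            public SessionListener()
            {
                CreateHandle(new CreateParams());   // no visible UI is ever created
                WTSRegisterSessionNotification(Handle, NOTIFY_FOR_THIS_SESSION);
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_WTSSESSION_CHANGE)
                {
                    // m.WParam carries WTS_REMOTE_DISCONNECT, WTS_SESSION_LOGOFF, etc.
                    // Do the housecleaning here, then unregister and exit.
                }
                base.WndProc(ref m);
            }
        }

    Replacing proc.WaitForExit() with Application.Run(), and exiting from a process-Exited event handler (proc.EnableRaisingEvents = true), keeps the message pump alive without adding anything to Alt-Tab.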

    Read the article

  • Setting up SVN (Subversion) to manage our company's files, how to exclude large files from being versioned

    - by Roeland
    Me and two other guys recently started our own web development company. We each work from our homes and have decided we want to keep one central location for all of our files. These files include Word documents, spreadsheets, client files, designs, etc. Anything pertaining to our company. I have a pretty solid internet connection and a Windows 2008 server box sitting at home, so I set up a Subversion repository. Our file repository will look something like this: Clients Company A Design (Photoshop files, wireframes, concepts) Documents (logins, quotes, proposals etc) Site Backups Company B Design Documents Site Backups Prospects Company C Company D Our Company Our Website Documents (contract, operating procedures) My question is in regards to design files. The Photoshop files that my designer works with range in size from 10 MB to 100 MB. I don't think we need to keep these files versioned, as this would eat up space incredibly fast. How do I go about controlling which files get versioned and which files are just stored? What I am thinking is that all documents need to be versioned, and any files other than that should not be. Any help would be appreciated, thanks! Edit: I am also curious whether this is the way to go. I just like this system since it keeps versions of all my documents. Also, essentially I will have 3 backups in 3 different locations (3 local copies), so no need for backing it up. I am unsure how SVN would perform as purely a huge file repository.
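
    Assuming plain Subversion, the usual mechanism for this is the svn:ignore property (or the client-side global-ignores option); a sketch of both (paths and patterns are this example's):

        # Ignore Photoshop sources under a design directory; the property is
        # versioned, so everyone who checks out gets the same rule:
        svn propset svn:ignore "*.psd" "Clients/Company A/Design"
        svn commit -m "Stop versioning Photoshop sources"

        # Files that are already versioned must be removed from version control
        # first (the working copy keeps its local file):
        svn delete --keep-local "Clients/Company A/Design/homepage.psd"

        # Or ignore the patterns on every working copy via ~/.subversion/config:
        # [miscellany]
        # global-ignores = *.psd *.pdd

    Note that ignores only stop unversioned files from being added; Subversion has no built-in "store but don't version" mode, so many teams put large binaries on a plain file share instead.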

    Read the article

  • Grails question (sample 1 of the Grails in Action book): problem with Controller and Service

    - by fegloff
    Hi, I'm doing the Grails in Action sample for chapter one. Everything was just fine until I started to work with services. When I run the app I have the following error: groovy.lang.MissingPropertyException: No such property: quoteService for class: qotd.QuoteController at qotd.QuoteController$_closure3.doCall(QuoteController.groovy:14) at qotd.QuoteController$_closure3.doCall(QuoteController.groovy) at java.lang.Thread.run(Thread.java:619) Here is my Groovy QuoteService class, which has an error within the definition of getStaticQuote (ERROR: Groovy: unable to resolve class Quote) package qotd class QuoteService { boolean transactional = false def getRandomQuote() { def allQuotes = Quote.list() def randomQuote = null if (allQuotes.size() > 0) { def randomIdx = new Random().nextInt(allQuotes.size()) randomQuote = allQuotes[randomIdx] } else { randomQuote = getStaticQuote() } return randomQuote } def getStaticQuote() { return new Quote(author: "Anonymous", content: "Real Programmers Don't eat quiche") } } Controller Groovy class: package qotd class QuoteController { def index = { redirect(action: random) } def home = { render "<h1>Real Programmers do not eat quiche!</h1>" } def random = { def randomQuote = quoteService.getRandomQuote() [ quote : randomQuote ] } def ajaxRandom = { def randomQuote = quoteService.getRandomQuote() render "<q>${randomQuote.content}</q>" + "<p>${randomQuote.author}</p>" } } Quote class: package qotd class Quote { String content String author Date created = new Date() static constraints = { author(blank:false) content(maxSize:1000, blank:false) } } I'm doing the samples using Eclipse with the Grails add-in. Any advice? Regards, Francisco
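
    For what it's worth, that MissingPropertyException usually just means the controller never declares the service property, so Grails has nothing to inject by name; a sketch of the fix:

        package qotd

        class QuoteController {
            // Declaring the property is what triggers dependency injection;
            // without it, quoteService is an unknown property at runtime.
            def quoteService

            def random = {
                def randomQuote = quoteService.getRandomQuote()
                [quote: randomQuote]
            }
        }

    The "unable to resolve class Quote" message in the service typically clears up once the Quote domain class compiles under grails-app/domain/qotd, since the service and domain classes share the same qotd package.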

    Read the article

  • Simulate stochastic bipartite network based on trait values of species - in R

    - by Scott Chamberlain
    I would like to create bipartite networks in R. For example, if you have a data.frame of two types of species (that can only interact across species, not within species), and each species has a trait value (e.g., mouth size in the predator determines which prey species it can eat), how do we simulate a network based on the traits of the species (that is, two species can only interact if their trait values overlap, for instance)? UPDATE: Here is a minimal example of what I am trying to do. 1) create a phylogenetic tree; 2) simulate traits on the phylogeny; 3) create networks based on species trait values. # packages install.packages(c("ape","phytools")) library(ape); library(phytools) # Make phylogenetic trees tree_predator <- rcoal(10) tree_prey <- rcoal(10) # Simulate traits on each tree trait_predator <- fastBM(tree_predator) trait_prey <- fastBM(tree_prey) # Create network of predator and prey ## This is the part I can't do yet. I want to create bipartite networks, where ## predator and prey interact based on certain criteria. For example, predator ## species A and prey species B only interact if their body size ratio is ## greater than X.
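
    Picking up from the commented section, one way to finish the example is to build the incidence matrix directly from the two trait vectors; a sketch in which the linking rule and the threshold X = 1 are arbitrary stand-ins:

        # Deterministic links: predator i eats prey j when the trait ratio clears X
        X <- 1
        adj <- outer(trait_predator, trait_prey,
                     FUN = function(pred, prey) as.integer(pred / prey > X))
        rownames(adj) <- names(trait_predator)
        colnames(adj) <- names(trait_prey)

        # Stochastic links: turn the trait difference into a probability instead
        prob <- outer(trait_predator, trait_prey,
                      FUN = function(pred, prey) plogis(pred - prey))
        adj_stochastic <- matrix(rbinom(length(prob), 1, prob), nrow = nrow(prob))

    The resulting 10 x 10 matrix can then be handed to packages such as bipartite or igraph for plotting and network statistics.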

    Read the article

  • What is the correct way to open and close window/dialog?

    - by mree
    I'm trying to develop a new program. The workflow looks like this: Login --> Dashboard (window with menus) --> Module 1 --> Module 2 --> Module 3 --> Module XXX So, to open the Dashboard from the Login (a dialog), I use Dashboard *d = new Dashboard(); d->show(); close(); In the Dashboard, I use this code to reopen the Login if the user closes the window (by clicking the 'X'): closeEvent(QCloseEvent *) { Login *login = new Login(); login->show(); } With Task Manager open, I ran the program and monitored the memory usage. After opening the Dashboard from the Login and closing the Dashboard to return to the Login, I noticed that the memory keeps increasing by about 500 KB each time. It can go up from 12 MB to 20 MB of memory usage just by opening and closing the window/dialog. So, what did I do wrong here? I need to know before I continue developing those modules, which will definitely eat more memory with my programming. Thanks in advance.
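
    A likely cause worth checking first: closing a QWidget hides it by default rather than deleting it, so every Login/Dashboard ever created stays in memory. One common fix is to opt the windows in to deletion on close:

        // Ask Qt to destroy the window when it is closed instead of hiding it.
        Dashboard *d = new Dashboard();
        d->setAttribute(Qt::WA_DeleteOnClose);
        d->show();
        close();

    The Login created inside closeEvent() needs the same attribute; otherwise each round trip leaks one of each window.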

    Read the article

  • Sheet and thread memory problem

    - by Xident
    Hi guys, recently I started a project which can export some precalculated graphics/audio to files for post-processing. All I did was put a new window (with a progress indicator and an Abort button) in my main xib and open it using the following code: [NSApp beginSheet: REC_Sheet modalForWindow: MOTHER_WINDOW modalDelegate: self didEndSelector: nil contextInfo: nil]; NSModalSession session=[NSApp beginModalSessionForWindow:REC_Sheet]; RECISNOTDONE=YES; while (RECISNOTDONE) { if ([NSApp runModalSession:session]!=NSRunContinuesResponse) break; usleep(100); } [NSApp endModalSession:session]; A background thread (pthread) was started earlier to actually perform the work and save all the targa/wave files. This worked great, but after some time it turned out that the main thread was no longer responding and my memory footprint grew uncontrollably. I tried to debug it with Instruments, and saw a lot of CFHash and similar objects growing towards infinity. By accident I clicked below the sheet, and temporarily it helped: the main thread (AppKit?) released its objects, but just for a little while. I can't explain it; at first I thought it was the access from my thread to the progress indicator to update the progress (at 0.5 s intervals), so I cut that out. But even if I don't update anything and do nothing with the progress indicator, my application eats up all the memory, because the main thread's event objects (or whatever they are) are never released. Is there any way to drain this main-thread memory (a run loop / NSApp call?). And why on earth doesn't the main thread respond anymore (after this simple task)? I don't have a clue anymore, please help! Thanks in advance! P.S. How do you guys implement "threaded long task" stuff and update your GUI?
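
    One classic culprit matching these symptoms: a thread that never sets up its own autorelease pool (or drains it only at thread exit) accumulates every autoreleased temporary it creates. A manual-retain-release sketch of the usual per-iteration pattern (names invented):

        static void *exportThreadMain(void *arg)
        {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSUInteger totalFrames = 1000;   // placeholder for the real frame count
            for (NSUInteger frame = 0; frame < totalFrames; frame++) {
                NSAutoreleasePool *framePool = [[NSAutoreleasePool alloc] init];
                // ... render one frame, write the targa/wave data ...
                [framePool drain];   // temporaries die here instead of piling up
            }
            [pool drain];
            return NULL;
        }

    The busy-wait around runModalSession: is also suspect; letting the run loop block normally (or driving progress updates with performSelectorOnMainThread:) avoids spinning the main thread at 10 kHz.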

    Read the article

  • jQuery : how to apply effect on a child element

    - by Tristan
    Hello, instead of rewriting the same function, I want to optimise my code: <div class="header"> <h3>How to use the widget</h3> <span id="idwidget" ></span> </div> <div class="content" id="widget"> The JS: <script type="text/javascript"> $(document).ready(function() { var showText="Show"; var hideText="Hide"; $("#idwidget").before("<a href='#' class='button' id='toggle_link'>"+showText+"</a>"); $('#widget').hide(); $('a#toggle_link').click(function() { if ($('a#toggle_link').text()==showText) { $('a#toggle_link').text(hideText); } else { $('a#toggle_link').text(showText); } $('#widget').toggle('slow'); return false; }); }); This works only with the div called widget and the button called idwidget. But on this page I also have: <div class="header"> <h3>How to eat APPLES</h3> <span id="IDsomethingelse" ></span> </div> <div class="content" id="somethingelse"> And I want the code to work with that too. I heard about traversing to child elements; do you have an idea how to do that, please? Thank you
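
    One way to generalise it (a sketch assuming, as in the markup above, that each div.content immediately follows its div.header):

        $(document).ready(function () {
            var showText = "Show", hideText = "Hide";

            // One pass over every header: add its toggle link and hide its content.
            $("div.header").each(function () {
                $(this).children("span").before(
                    "<a href='#' class='button toggle_link'>" + showText + "</a>");
                $(this).next("div.content").hide();
            });

            // One shared handler instead of one function per widget.
            $("a.toggle_link").click(function () {
                var link = $(this);
                link.text(link.text() === showText ? hideText : showText);
                link.closest("div.header").next("div.content").toggle("slow");
                return false;
            });
        });

    Using a class rather than id='toggle_link' matters here: IDs must be unique, so the original link would be duplicated illegally on a page with two widgets.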

    Read the article

  • How to debug properly and find causes for crashes?

    - by Newbie
    I don't know what to do anymore... it's hopeless. I'm getting tired of guessing what's causing the crashes. Recently I noticed some OpenGL calls crash programs randomly on some graphics cards, so I am getting really paranoid about what can cause crashes now. The bad thing about this crash is that it happens only after a long time of using the program, so I can only guess what the problem is. I can't remember what changes I made to the program that may cause the crashes; it's been so long. But luckily the previous version doesn't crash, so I could just copy-paste some code and waste 10 hours to see at which point it starts crashing... I don't think I want to do that yet. The program crashes after I make it process the same files about 5 times in a row; each time it uses about 200 megabytes of memory in the process. It crashes at random times during and after the reading process. I have created a "safe" free() function: it checks that the pointer is not NULL, then frees the memory, and then sets the pointer to NULL. Isn't this how it should be done? I watched the Task Manager memory usage, and just before it crashed it started to eat twice as much memory as usual. Also, the program loading became exponentially slower every time I loaded the files; the first few loads didn't seem much slower than each other, but then the load times started rapidly doubling. What should this tell me about the crash? Also, do I have to manually free C++ vectors by using clear()? Or are they freed automatically; for example, if I allocate a vector inside a function, will it be freed every time the function ends? I am not storing pointers in the vector. -- Shortly: I want to learn to catch the damn bugs as fast as possible; how do I do that? Using Visual Studio 2008.
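
    On the vector question specifically: a std::vector owns its storage and releases it automatically when it goes out of scope, so no clear() is needed for that; clear() only empties the vector early. A tiny illustration:

        #include <vector>

        void processFile() {
            std::vector<int> samples(1000000);   // owns heap storage
            // ... use samples ...
        }   // destructor runs here; the memory is freed without clear()

    (That guarantee covers the elements themselves; raw pointers stored in a vector would still need their pointees deleted manually.) The doubling load times and doubling memory strongly suggest something is accumulating per load, so a leak detector such as Visual Studio's CRT debug heap (_CrtDumpMemoryLeaks) is a good first stop.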

    Read the article

  • Live clock javascript starts from my custom time

    - by newworroo
    I was trying to create a live/dynamic clock that is based on my custom time instead of the system time. There are many scripts, but I couldn't find a clock that starts from my custom time. Here is an example that I'm trying to modify. The problem is that the seconds don't change, and it looks like I need to use AJAX. Is there any way to do it without AJAX? If not, help me do it using AJAX! The reason I don't like the AJAX method is that another page has to be called and refreshed, so it will eat server RAM. ex) http://www.javascriptkit.com/script/cut2.shtml Before: <script> function show(){ var Digital=new Date() var hours=Digital.getHours() var minutes=Digital.getMinutes() var seconds=Digital.getSeconds() ... ... After: <script> function show(){ var Digital=new Date() var hours=<?php echo $hr; ?>; var minutes=<?php echo $min; ?>; var seconds=<?php echo $sec; ?>; ... ...
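
    No AJAX is needed if the page embeds the custom time once and the script keeps only its offset from the visitor's clock; a sketch along those lines (reusing the page's $hr/$min/$sec variables):

        <script>
        // Custom time, printed once when the page is generated.
        var customStart = new Date();
        customStart.setHours(<?php echo $hr; ?>, <?php echo $min; ?>, <?php echo $sec; ?>);

        // Fixed offset between the custom clock and the visitor's clock.
        var offsetMs = customStart.getTime() - new Date().getTime();

        function show() {
            // Local clock plus offset: the seconds tick with no further requests.
            var Digital = new Date(new Date().getTime() + offsetMs);
            var hours = Digital.getHours();
            var minutes = Digital.getMinutes();
            var seconds = Digital.getSeconds();
            // ... format and display exactly as in the original script ...
            setTimeout(show, 1000);
        }
        show();
        </script>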

    Read the article

  • SQL SERVER – Developer Training Resources and Summary Roundup

    - by pinaldave
    It is always a pleasure for any author when other renowned authors in the industry write about you. Earlier I wrote a five-part blog series on Developer Training and I have received a phenomenal response to the series. I have received plenty of comments, questions and feedback. I thought it would be nice to sum up the whole series as well as answer a few of the questions received. Quick Recap Developer Training - Importance and Significance - Part 1 In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees and consider it the most important task. Developer Training – Employee Morals and Ethics – Part 2 In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer. There are companies that pay for 100% of the expenses and there are employees who opt for training at their own expense during their personal time. Training is often looked at as vacation by employee and employer alike, and we need to change this mind-set. One of the ways is to report back the learning to your manager and implement newly learned knowledge in day-to-day work. Developer Training – Difficult Questions and Alternative Perspective - Part 3 This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization. The manager often feels pressure to accommodate every single employee for training even though his training budget is limited. It is indeed the responsibility of the developer to get maximum advantage from the training. Training immediately helps organizations but stays as a part of an employee's knowledge forever. Developer Training – Various Options for Developer Training – Part 4 In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and I should have explored each of the subjects of the training in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training. The common element between both methods is the "learning material". Learning material can be of any format – videos, books, paper notes or just a plain blackboard. Instructor-led training is a very effective mode but not possible every single time. During the course of a developer's career, one has to learn lots of new technology and it is almost impossible to have a quality trainer available on that subject at that time. Books are the most effective and proven method; however, it always helps if someone explains the concepts of the book with a demonstration. In recent times I have started to believe in online training, which leads to a hybrid experience. Online training takes the best part of books and the best part of instructor-led training and gives effective training in a matter of hours. Developer Training – A Conclusive Summary - Part 5 In this part, I shared what I was continuously thinking about developer training.
    There is no better teacher than oneself. There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. "Change is the only constant" and "adapt & overcome" are the essential lessons of life. One cannot stop learning and resist change. In the IT industry, the "ego of knowing it all" and the "resistance to change" are the most challenging issues. Once someone overcomes them, life is much easier. I believe that proper and appropriate high-quality training can help to address these burning issues. Opinion of Friends I invited a few of my friends to express their opinions about developer training and here are their opinions. I am listing them here in the order of the blog post publishing date. Nakul Vachhrajani - Developer Trainings-Importance, Benefits, Tips and follow-up Nakul sums up many of the concepts which are complementary to my blog posts. Nakul addresses the burning question of developer training from different angles. I am personally very impressed by his following statement - "Being skilled does not mean having just a stack of certifications, but it also means having an understanding about the internals of the products that you are working on – and using that knowledge to improve the efficiency & productivity at the workplace in turn resulting in better products, better consulting abilities and a happier self." Nakul also suggests the online training options of Pluralsight. Vinod Kumar - Training–a necessity or bonus Vinod Kumar comes up with an excellent follow-up on developer training. Vinod is known for his inspirational writing about SQL Server. Vinod starts with a story of a student who is extremely eager to learn the wisdom of life from a monk, but the monk does not accept him as a disciple for a long time. The conversation between student and monk is indeed the essence of all learning. We all want to learn quickly and be successful, but the most important thing in life is to have the right attitude towards learning, and more so towards life. The blog post ends with a very important thought about how to avoid the famous excuse – "I don't have enough time." Ritesh Shah - Training – useful or useless? Ritesh brings up a very important concept related to training. Ritesh, in his meticulous style, explains why training is an important and lifelong process. Training must not stop at any age but should continue forever. The moment training stops, progress stops along with it. Paras Doshi - Professional Development Resource Paras is known for his to-the-point writing, and has summarized the five-part series very precisely. He read the five-part series and created a digest summary of the blog posts. If you are in a rush and have no time to read my five-part series, I suggest you read his blog post. Training Resources I am often asked what the best resources for learning new technology are. This is the most difficult question EVER. There are plenty of good training resources available. When it is about training, our needs are different, our preferences for learning are different and we all have opinions. Additionally, we are all located in different geographic locations worldwide and there is no way one solution will fit all. However, let me list a few of the training resources which I have built so far; you can consume them if you find them relevant to your needs.
    SQL Server books: SQL Server Interview Questions and Answers; SQL Wait Stats; SQL Programming Joes 2 Pros. SQL Server video tutorials: SQL Server Questions and Answers; SQL Server Performance: Indexing Basics; SQL Server Performance: Introduction to Query Tuning. SQL in Sixty Seconds: a series of sixty-second learning videos on YouTube. Trust me, the worldwide web is very big and there is plenty of high-quality learning material available worldwide – trainer-led as well as online. I suggest you explore various options and make the best choice for yourself. Remember, training is your personal journey and it should never stop. Are you ready? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – A Quick Look at Logging and Ideas around Logging

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post on Logging. When someone talks about logging, personally I get lots of ideas about it. I have seen logging used as a very generic term. Let me ask you this question first before I continue writing about logging. What is the first thing that comes to your mind when you hear the word "Logging"? Now ask the same question to the guy standing next to you. I am pretty confident that you will get a different answer from different people. I decided to do this activity and asked 5 SQL Server people the same question. Question: What is the first thing that comes to your mind when you hear the word "Logging"? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one. Output Clause The very first person replied "output clause". A pretty interesting answer to start with. I see exactly what he was thinking. SQL Server 2005 introduced the new OUTPUT clause. The OUTPUT clause has access to the inserted and deleted (virtual) tables, just like triggers. The OUTPUT clause can be used to return values to the client. The OUTPUT clause can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. Here are some references for the OUTPUT clause: OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE Reasons for Using Output Clause – Quiz Tips from the SQL Joes 2 Pros Development Series – Output Clause in Simple Examples Error Logs I was expecting someone to mention error logs when it is about logging. The error log is the first place to look when there is any error, either with the application or with the operating system. My policy is to check my server's error log every day. The reason is simple: enough times in my career I have found that when I look at error logs I find something I was not expecting. There are cases when I noticed errors in the error log and fixed them before the end user noticed them. Another common practice I always suggest to my DBA friends: when any error happens, they should find the relevant entries in the error logs and document them. It is quite possible that they will see the same error in the error log again and be able to fix it based on the knowledge base which they have created. There are many different kinds of error log files in SQL Server as well – 1) SQL Server Error Logs 2) Windows Event Log 3) SQL Server Agent Log 4) SQL Server Profiler Log 5) SQL Server Setup Log etc. Here are some references for error logs: Recycle Error Log – Create New Log file without Server Restart SQL Error Messages Change Data Capture I was surprised by this answer. More than the answer, I was surprised by the person who gave it. I always thought he was an expert in HTML and JavaScript, but I guess one should never make assumptions about others. Indeed, one of the cool logging features is Change Data Capture. Change Data Capture records INSERTs, UPDATEs, and DELETEs applied to SQL Server tables, and makes a record available of what changed, where, and when, in simple relational 'change tables' rather than in an esoteric chopped salad of XML. These change tables contain columns that reflect the column structure of the source table you have chosen to track, along with the metadata needed to understand the changes that have been made.
    Here are some references for Change Data Capture: Introduction to Change Data Capture (CDC) in SQL Server 2008 Tuning the Performance of Change Data Capture in SQL Server 2008 Download Script of Change Data Capture (CDC) CDC and TRUNCATE – Cannot truncate table because it is published for replication or enabled for Change Data Capture Dynamic Management View (DMV) I like this answer. If asked, I would not have come up with DMVs right away, but in the spirit of the original question, I think DMVs do log data. DMVs log, store, or record various data and activity on the SQL Server. Dynamic management views return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. One can get a plethora of information from DMVs – high availability status, query execution details, SQL Server resource status etc. Here are some references for dynamic management views (DMVs): SQL SERVER – Denali – DMV Enhancement – sys.dm_exec_query_stats – New Columns DMV – sys.dm_os_windows_info – Information about Operating System DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28 DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module Transaction Log Impact Detection Using DMV – dm_tran_database_transactions Log Files I almost flipped at this final answer from my friend. This should probably have been the first answer. Yes, indeed, the log file logs SQL Server activities. One could write infinite things about log files. SQL Server uses the log file with the extension .ldf to manage transactions and maintain database integrity. The log file ensures that valid data is written out to the database and the system is in a consistent state. Log files are extremely useful in case of database failures, as with the help of a full backup file the database can be brought to the desired state (point-in-time recovery is also possible). A SQL Server database has three recovery models – 1) Simple, 2) Full and 3) Bulk Logged. Each of the models uses the .ldf file for performing various activities. It is very important to take backups of the log files (along with full backups), as one never knows when a backup of the log file will come into action and save the day! How to Stop Growing Log File Too Big Reduce the Virtual Log Files (VLFs) from LDF file Log File Growing for Model Database – model Database Log File Grew Too Big master Database Log File Grew Too Big SHRINKFILE and TRUNCATE Log File in SQL Server 2008 Can I just say I loved this month's T-SQL Tuesday question? It really provoked very interesting conversations around me. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
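
    A minimal illustration of the OUTPUT clause that opened the list (table and column names invented for the example):

        DECLARE @Audit TABLE (EmployeeID INT, OldSalary MONEY, NewSalary MONEY);

        UPDATE dbo.Employees
        SET Salary = Salary * 1.10
        OUTPUT inserted.EmployeeID, deleted.Salary, inserted.Salary
        INTO @Audit (EmployeeID, OldSalary, NewSalary)
        WHERE Department = N'Sales';

        SELECT * FROM @Audit;   -- one logged row per updated employee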

    Read the article

  • SQL SERVER – Data Sources and Data Sets in Reporting Services SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. Connecting to Your Data? When I was a child, the telephone book was an important part of my life. Maybe I was just a nerd, but I enjoyed getting a new book every year to page through to learn about the businesses in my small town or to discover where some of my school acquaintances lived. It was also the source of maps to my town's neighborhoods and the towns that surrounded me. To make a phone call, I would need a telephone number. In order to find a telephone number, I had to know how to use the telephone book. That seems pretty simple, but it resembles connecting to any data. You have to know where the data is and how to interact with it. A data source is the connection information that the report uses to connect to the database. You have two choices when creating a data source: whether to embed it in the report or to make it a shared resource usable by many reports. Data Sources and Data Sets A few basic terms will make the upcoming choices make more sense. What database on what server do you want to connect to? It would be better to just ask… "what is your data source?" The connection you need to make to get your report's data is called a data source. If you connected to a data source (like the JProCo database) there may be hundreds of tables. You probably only want data from just a few tables. This means you want to write a specific query against this data source. A query on a data source to get just the records you need for an SSRS report is called a data set. Creating a Local Data Source You can embed a connection from your report directly to your JProCo database which (let's say) is installed on a server named Reno. If you move JProCo to a new server named Tampa then you need to update the data source. If you have 10 reports in one project that were all pointing to the JProCo database on the Reno server then they would all need to be updated at once. It's possible to make a project-level data source and have each report use that. This means one change can fix all 10 reports at once. This would be called a shared data source. Creating a Shared Data Source The best advice I can give you is to create shared data sources. The reason I recommend this is that if a database moves to a new server you will have just one place in Report Manager to make the server name change. That one change will update the connection information in all the reports that use that data source. To get started, you will start with a fresh project. Go to Start > All Programs > SQL Server 2012 > Microsoft SQL Server Data Tools to launch SSDT. Once SSDT is running, click New Project to create a new project. Once the New Project dialog box appears, fill in the form. Be sure to select Report Server Project this time – not the wizard. Click OK to dismiss the New Project dialog box. You should now have an empty project in the Solution Explorer. A report is meant to show you data. Where is the data? The first task is to create a shared data source. Right-click on the Shared Data Sources folder and choose Add New Data Source. The Shared Data Source Properties dialog box will launch, where you can fill in a name for the data source. By default, it is named DataSource1.
    The best practice is to give the data source a more meaningful name. It is possible that you will have projects with more than one data source and, by naming them, you can tell one from another. Type the name JProCo for the data source name and click the Edit button to configure the database connection properties. If you take a look at the types of data sources you can choose, you will see that SSRS works with many data platforms including Oracle, XML, and Teradata. Make sure SQL Server is selected before continuing. For this post, I am assuming that you are using a local SQL Server and that you can use your Windows account to log in to the SQL Server. If, for some reason, you must use SQL Server Authentication, choose that option and fill in your SQL Server account credentials. Otherwise, just accept Windows Authentication. If your database server was installed locally and with the default instance, just type in Localhost for the server name. Select the JProCo database from the database list. At this point, the connection properties are complete. If you have installed a named instance of SQL Server, you will have to specify the server name like this: Localhost\InstanceName, replacing InstanceName with whatever your instance name is. If you are not sure about the named instance, launch the SQL Server Configuration Manager found at Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools. If you have a named instance, the name will be shown in parentheses. A default instance of SQL Server will display MSSQLSERVER; a named instance will display the name chosen during installation. Once you get the connection properties filled in, click OK to dismiss the Connection Properties dialog box and OK again to dismiss the Shared Data Source properties. You now have a data source in the Solution Explorer. What's Next I really need to thank Kathi Kellenberger and Rick Morelan for sharing this material for this 5-day series of posts on SSRS. To get really comfortable with SSRS you will get to know the different SSDT windows, build reports on your own (without the wizards), add report headers and footers, accept user input, and create levels, charts, or even maps for visual appeal. You might be surprised to know that a small 230-page book starts from the very beginning and covers the steps to do all these items. Beginning SSRS 2012 is a small, easy-to-follow book, so you can learn SSRS for less than $20. See Joes2Pros.com for more on this and other books. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
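
    For reference, the dialog in this walkthrough is simply building an ordinary SQL Server connection string behind the scenes; the two common shapes look roughly like this (server, instance, and database names are the example's own):

        Data Source=Localhost;Initial Catalog=JProCo;Integrated Security=True
        Data Source=Localhost\InstanceName;Initial Catalog=JProCo;Integrated Security=True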

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. What is the Report Wizard? In today's world automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography "Any customer can have a car painted any color that he wants so long as it is black." Today you can buy a car straight from the factory with your choice of several colors and with many options like back-up cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special, not offered by the manufacturer, you will have to find a way to add it later. In computer software, we also have "assembly lines" called wizards. A wizard will ask you a series of questions, often branching to specific questions based on earlier answers, until you get to the end of the wizard. These wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results. Create a Report with the Report Wizard Let's get started with your first report! Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the properties accordingly. Be sure to select Report Server Project Wizard as the type of report and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen. On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name. Make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type in localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next. On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string text box. Click Next. On the Select the Report Type screen, choose Tabular and click Next. On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways.
    Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure about this ahead of time is that TotalSales goes in the Details section. Let's assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Let's move the fields to the right-hand side. This is done by selecting Page > Group or Details > and clicking Next. On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next. I chose the default, Slate. On the Choose the Deployment Location screen, change the deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish. After clicking Finish, the report and a shared data source will appear in the Solution Explorer and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year, which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly-line report completed, and you probably already have some ideas on how to improve it. Tomorrow's Post Tomorrow's blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in easy, simple words – I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • PSU failing or Mainboard failing?

    - by Andrei Rinea
    I have been having some trouble lately powering on my desktop workstation. When starting up the PC after it has been off for hours (usually at least 8 hours), it randomly fails to do so. What happens is this: I press the power button; nothing happens. I can hear a moderate buzzing noise at the back of the PC (near the PSU), but I can't say for sure that it's not from the mainboard. If I keep pressing the power button a few times over 1-2 minutes, it'll start. Another route would be that, instead of (3), I unplug the power cable from the PSU and wait for 30 seconds. Then I press the power button and hold it for 30-60 seconds (I have had some success with notebooks using a similar approach). Then I plug the cable back into the PSU, press the power button only once, and it starts normally. Also, while running normally, I keep hearing some low buzzing which seems to be fan-RPM-related (i.e. when processing images or doing CPU-intensive work). What should I look into? UPDATE It's getting worse. It took more than 10 retries today and almost 20 minutes to start the computer. I tried the paperclip trick and the PSU behaves perfectly. I managed to start the computer like so: I pressed the on-button a few times and then left the PC in a pre-startup state (the fans were working and the buzzing noise was strong) and went to eat. I figured I wouldn't set the house on fire that quickly, and not without smelling it first. Back after 10-15 min, the computer booted up! I discussed this with a fellow at Intel and he told me the capacitors on the mainboard are probably a bit shot. If they are shot, he said, it should start up perfectly when warm. So I did restart it, warm, a few times (with 5-second and then 40-second cooldowns), and it started up perfectly. I can either replace the capacitors on the mainboard (doesn't sound worth it) or replace the mainboard (this one sucks too :)). FINAL INFO: It was the PSU after all. Although it was powering the IDE and SATA drives, its mainboard power module was failing. I bought another mainboard just to find out that this wasn't the cause. Now I'll have to return it somehow. The spare PSU is now in the computer and doing well. Although larger (500W), it's like a plane taking off. I need a better one.

    Read the article

  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company, which has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far out of my comfort zone. I'm definitely not a network admin. Their IT consultancy contractor is pushing them to upgrade their dedicated Exchange server, even though it seems like the one they currently have has a lot of life left in it and is running problem-free. They say it's "coming to the natural end of its life". They want to install a monster with a Xeon E5-2420, 32GB RAM, 2x 1TB HDDs, Windows Server 2012 and Microsoft Exchange 2010. They want to charge a small fortune for it. Basically, this system seems massively over the top, seeing as it won't be doing anything other than running as an Exchange server for a company with fewer than 25 email accounts. My employers also have an in-house file server that hosts three web apps, an SQL Server instance, their local domain, a print server and shared folders. That machine has the same specs as the proposed new one, and it is barely using any of its potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that Exchange can't run on the same system as an SQL server because, for some reason, they will eat up each other's resources (even though the SQL Server isn't touching 1% of the current system's CPU or RAM). My question is really: are they trying to rip my employers off? Could Exchange be installed on their other server (in a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary. I apologise, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no-one will want to read. I've been doing my research, but I'm not knowledgeable enough to make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer. Further Details: The new Exchange server would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with BlackBerrys.

    Read the article

  • Updating the managed debugging API for .NET v4

    - by Brian Donahue
    In any successful investigation, the right tools play a big part in collecting evidence about the state of the "crime scene" as it was before the detectives arrived. Unfortunately for the Crash Scene Investigator, we don't have the budget to fly out to the customer's site, chalk the outline, and eat their doughnuts. We have to rely on the end-user to collect the evidence for us, which means giving them the fingerprint dust and the evidence baggies and leaving them to it. With that in mind, the Red Gate support team have been writing tools that can collect vital clues with a minimum of fuss. Years ago we would have asked for a memory dump, where we used to get the customer to run CDB.exe and produce dumps that we could analyze in-house, but those dumps were pretty unwieldy (500MB files) and the debugger often didn't dump exactly where we wanted, or made five or more dumps. What we wanted was just the minimum state information from the program at the time of failure, so we produced a managed debugger that captured every first and second-chance exception and logged the stack and a minimal amount of variables from the memory of the application, which could all be exported as XML. This caused less inconvenience to the end-user because it is much easier to send a 65KB XML file in an email than a 500MB file containing all of the application's memory. We don't need to have the entire victim shipped out to us when we just want to know what was under the fingernails. The thing that made creating a managed debugging tool possible was the MDbg Engine example written by Microsoft as part of the Debugging Tools for Windows distribution. Since the ICorDebug interface is a bit difficult to understand, they had kindly created some wrappers that provided an event-driven debugging model that was perfect for our needs, but .NET 4 applications under debugging started complaining that "The debugger's protocol is incompatible with the debuggee". The introduction of .NET Framework v4 had changed the managed debugging API significantly, however, without an update for the MDbg Engine code! After a few hours of research, I finally worked out that most of the version 4 ICorDebug interface still works much the same way in "legacy" v2 mode, and there was a relatively easy fix for the problem in that you can still get a reference to the legacy ICorDebug by changing the way the interface is created. In .NET v2, the interface was acquired using the CreateDebuggingInterfaceFromVersion method in mscoree.dll. In v4, you must first create IClrMetaHost, enumerate the runtimes, get an ICLRRuntimeInfo interface to the .NET 4 runtime from that, and use the GetInterface method in mscoree.dll to return a "legacy" ICorDebug interface. The rest of the MDbg Engine will continue working the old way. Here is how I changed the MDbg Engine code to support .NET v4:

        private void InitFromVersion(string debuggerVersion)
        {
            if (debuggerVersion.StartsWith("v1"))
            {
                throw new ArgumentException("Can't debug a version 1 CLR process (\"" + debuggerVersion +
                    "\"). Run application in a version 2 CLR, or use a version 1 debugger instead.");
            }

            ICorDebug rawDebuggingAPI = null;
            if (debuggerVersion.StartsWith("v4"))
            {
                Guid CLSID_MetaHost = new Guid("9280188D-0E8E-4867-B30C-7FA83884E8DE");
                Guid IID_MetaHost = new Guid("D332DB9E-B9B3-4125-8207-A14884F53216");
                ICLRMetaHost metahost = (ICLRMetaHost)NativeMethods.ClrCreateInterface(CLSID_MetaHost, IID_MetaHost);
                IEnumUnknown runtimes = metahost.EnumerateInstalledRuntimes();
                ICLRRuntimeInfo runtime = GetRuntime(runtimes, debuggerVersion);

                // Defined in metahost.h
                Guid CLSID_CLRDebuggingLegacy = new Guid(0xDF8395B5, 0xA4BA, 0x450b, 0xA7, 0x7C, 0xA9, 0xA4, 0x77, 0x62, 0xC5, 0x20);
                Guid IID_ICorDebug = new Guid("3D6F5F61-7538-11D3-8D5B-00104B35E7EF");
                Object res;
                runtime.GetInterface(ref CLSID_CLRDebuggingLegacy, ref IID_ICorDebug, out res);
                rawDebuggingAPI = (ICorDebug)res;
            }
            else
                rawDebuggingAPI = NativeMethods.CreateDebuggingInterfaceFromVersion((int)CorDebuggerVersion.Whidbey, debuggerVersion);

            if (rawDebuggingAPI != null)
                InitFromICorDebug(rawDebuggingAPI);
            else
                throw new ArgumentException("Support for debugging version " + debuggerVersion + " is not yet implemented");
        }

    The changes above will ensure that the debugger can support .NET Framework v2 and v4 applications with the same codebase, but we do compile two different applications: one targeting v2 and the other v4. As a footnote I need to add that some missing native methods and wrappers, along with the EnumerateRuntimes method code, came from the Mindbg project on Codeplex. Another change is that when using MDbgEngine.CreateProcess to launch a process in the debugger, do not supply a null as the final argument. This does not work any more because GetCORVersion always returns "v2.0.50727", as the function has been deprecated in .NET v4. What's worse is that on a system with only .NET 4, the user will be prompted to download and install .NET v2! Not nice! This works much better:

        proc = m_Debugger.CreateProcess(ProcessName, ProcessArgs, DebugModeFlag.Default,
            String.Format("v{0}.{1}.{2}",
                System.Environment.Version.Major,
                System.Environment.Version.Minor,
                System.Environment.Version.Build));

    Microsoft "unofficially" plan on updating the MDbg samples soon, but if you have an MDbg-based application, you can get it working right now by changing one method a bit and adding a few new interfaces (ICLRMetaHost, IEnumUnknown, and ICLRRuntimeInfo). The new, non-legacy implementation of MDbg Engine will add new, interesting features like dump-file support and, by association, I assume garbage-collection/managed-object stats, so it will be well worth looking into if you want to extend the functionality of a managed debugger going forward.

    Read the article

  • Nerdstock 2012: A photo review of Microsoft TechEd North America 2012

    - by The Un-T Guy
    Not only could I not fathom that I would ever be attending a tech event of the magnitude of TechEd, neither could any of my co-workers.  As the least technical person in the history of Information Technology ever, I felt as though I were walking into the belly of the beast, fearing I'd not be allowed out until I could write SSIS packages, program in Visual Basic, or at least arm wrestle a DBA.  Most of my fears were unrealized.   But I made it.  I was here.  I even got to wear the Mark of the Geek neck package with schedule, eyeglass cleaners, name badge (company name obfuscated so they don't fire me), and a pen.  The name badge was seemingly the key element, as every vendor in the place wanted to scan it to capture name, email address, and numbers to show their bosses back home.  It also let me eat the food and drink the coffee so that's a fair trade.   A recurring theme throughout the presentations and vendor demos was "the Cloud" and BYOD (bring your own device).  The below was a common sight throughout the week, as attendees from all over the world brought their own devices and were able to (seemingly) seamlessly connect to the Worldwide Innerwebs.  Apparently proof that Microsoft and the event organizers were practicing what they were preaching.   "Cavernous" is one way to describe the downstairs facility itself.  "Freaking cavernous" might be more accurate.  Work sessions were held in classrooms on the second and third floors but the real action was happening downstairs.  Microsoft bookstore, blogger hub (shoutout to Geekswithblogs.net), The Wall (sans Pink Floyd, sadly), couches, recharging stations…   …a game zone with pool and air hockey tables, pinball machines, foosball…   …vintage video games…           …and even a giant chess board.  Looked like this guy was opening with the Kaspersky parry.   The blend of technology and fantasy even went so far as to bring childhood favorites to life.  Assuming, of course, your childhood was pre-video games (like mine) and you were stuck with electric football and Rock 'em Sock 'em robots:   And, lest the "combatants" become unruly or – God forbid – afternoon snacks were late, Orange County's finest was on the scene to keep the peace.  On a high-tech mode of transport, of course.   She wasn't the only one to think this was a swell way to transition from one concourse to the next.  Given the level of support provided by the entire Orange County Convention Center staff, I knew they had to have some secret.   Here's one entrance to the vendor zone/"Technical Learning Center."  Couldn't help but think of them as the remora attached to the Whale Shark that is Microsoft…   …or perhaps planets orbiting the sun. Microsoft is just that huge and it seemed like every vendor in the industry looks forward to partnering with the tech behemoth.   Aside from the free stuff from the vendors, probably the most popular place in the house was the dining area.  Amazing spreads every day, multiple times a day.  While no attendance numbers were available at press time, literally thousands of attendees were fed, and fed well, every day.  And lest you think my post from earlier in the week exaggerated about the backpacks…   …or that I'm exaggerating about the lunch crowds.  This represents only about 25-30% of the lunch crowd – it was all my camera could capture at once.  No one went away hungry.   
The only thing missing was a vat of Red Bull, but apparently organizers went old school, with probably 100 urns of the original energy drink – coffee – all around the venue.

Of course, following lunch and afternoon sessions, some preferred the even older school method of re-energizing. There were rumors that Microsoft was serving graham crackers and milk in this area. But they were only rumors.

Cannot overstate the wonderful service provided by the Orange County Convention Center staff. Coffee, soft drinks, juice, and water were always available. Buffet meals were delicious, with a wide range of healthy options, in addition to the hundreds (at least) of special meal requests supported every day. Ever tried to keep up with an estimated 9,000 hungry and thirsty IT-ers? These folks did. Kudos to all of the staff, and many thanks!

And while I occasionally poke fun at the Whale Shark, if nothing else this experience convinced me of one thing: Microsoft knows how to put on a professional event. Hundreds of informative, professionally delivered sessions covering a wide range of topics at varying levels of expertise (some that even I was able to follow), social activities, vendor partnerships…they brought everything you could ask for to inform, educate, and inspire an entire IT industry.

So as I depart the belly of the beast, I can both take pride in the fact that I survived the week and marvel at the brilliance surrounding me. The IT industry – or at least the segment associated with Microsoft – is in good, professional hands. And what won't fit in their hands can be toted in the Microsoft-provided backpacks. Win-win.

Until New Orleans…

    Read the article

  • Oracle Linux and Oracle VM pricing guide

    - by wcoekaer
A few days ago someone showed me a pricing guide from a Linux vendor, and I was a bit surprised at its complexity, especially when you look at larger servers (4 or 8 sockets) and when you add virtual machine use into the mix. I think we have a very compelling and simple pricing model for both Oracle Linux and Oracle VM. Let me see if I can explain it in 1 page, not 10 pages. This pricing information is publicly available on the Oracle store; I am using the current public list prices. Also keep in mind that this is for customers using non-Oracle x86 servers. When a customer purchases an Oracle x86 server, the annual systems support includes full use (all you can eat) of Oracle Linux, Oracle VM, and Oracle Solaris, no matter how many VMs you run on that server in case you deploy guests on a hypervisor. This support level is the equivalent of Premier support in the list below.

Let's start with Oracle VM (x86). Oracle VM support subscriptions are per physical server on which you deploy the Oracle VM Server product:

(1) Oracle VM Premier Limited - 1- or 2-socket server: $599 per server per year
(2) Oracle VM Premier - more than 2-socket server (4, 8, or more): $1199 per server per year

The above includes the use of Oracle VM Manager and Oracle Enterprise Manager Cloud Control's virtualization management pack (including the self-service cloud portal, etc.), 24x7 support, and access to bugfixes, updates, and new releases. It also includes all options: live migrate, dynamic resource scheduling, high availability, dynamic power management, and so on. If you want to play with the product, or even use it without access to support services, it is freely downloadable from edelivery.

Next, Oracle Linux. Oracle Linux support subscriptions are per physical server. If you plan to run Oracle Linux as a guest on Oracle VM, VMware, or Hyper-V, you only have to pay for a single subscription per system; we do not charge per guest or per number of guests. In other words, you can run any number of Oracle Linux guests per physical server and count it as just a single subscription.

(1) Oracle Linux Network Support - any number of sockets per server: $119 per server per year
Network Support does not offer support services. It provides access to the Unbreakable Linux Network and also offers full indemnification for Oracle Linux.

(2) Oracle Linux Basic Limited Support - 1- or 2-socket server: $499 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.

(3) Oracle Linux Basic Support - more than 2-socket server (4, 8, or more): $1199 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, and use of Oracle Enterprise Manager Cloud Control for Linux OS management. It includes ocfs2 as a clustered filesystem.

(4) Oracle Linux Premier Limited Support - 1- or 2-socket server: $1399 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers to previous versions of a package, and Ksplice zero-downtime updates.

(5) Oracle Linux Premier Support - more than 2-socket server: $2299 per server per year
This subscription provides 24x7 support services, access to the Unbreakable Linux Network and the Oracle Support portal, indemnification, use of Oracle Clusterware for Linux HA, use of Oracle Enterprise Manager Cloud Control for Linux OS management, and XFS filesystem support. It also offers Oracle Lifetime Support, backporting of patches for critical customers to previous versions of a package, and Ksplice zero-downtime updates.

(6) Freely available Oracle Linux - any number of sockets
You can freely download Oracle Linux, install it on any number of servers, and use it for any reason, without support, without the right to use extra features like Oracle Clusterware or Ksplice, and without indemnification. However, you do have full access to all errata as well. Need support? Then use options (1)-(5).

So that's it. Count the number of 2-socket boxes and the number of larger boxes, decide on a Basic or Premier support level, and you are done. You don't have to worry about different levels based on how many virtual instances you deploy or want to deploy. A very simple menu of choices. We offer, inclusive: Linux OS clusterware, Linux OS management, provisioning and monitoring, a cluster filesystem (ocfs2), a high-performance filesystem (XFS), DTrace, Ksplice, and OFED (the InfiniBand stack for high-performance networking). No separate add-on menus.

NOTE: a socket/CPU can have any number of cores. So whether you have a 4-, 6-, 8-, 10-, or 12-core CPU doesn't matter; we count the number of physical CPUs.

    Read the article

  • Gamification = -10#/3mo

    - by erikanollwebb
One of the purposes of gamifying anything is to see if you can modify the behavior of the user. In the enterprise, that might mean getting sales people to enter more information into a CRM system, encouraging employees to update their HR records, motivating people to participate in forums and discussions, or getting invoices processed more quickly. Wikipedia defines behavior modification as "the traditional term for the use of empirically demonstrated behavior change techniques to increase or decrease the frequency of behaviors, such as altering an individual's behaviors and reactions to stimuli through positive and negative reinforcement of adaptive behavior and/or the reduction of behavior through its extinction, punishment and/or satiation." Gamification is just a way to modify someone's behavior using game mechanics. And the magic question is always whether it works.

So I thought I would present my own little experiment from the last few months. This spring, I upgraded to a Samsung Galaxy S4. It's a pretty sweet phone in many ways, but one of the little extras I discovered was a built-in app called S Health. S Health is an app you can use to track calories, weight, and exercise, and it has a built-in pedometer. I looked at it when I got the phone but assumed you had to turn it on to use it, so I didn't look at it much. But sometime in July, I realized that it in fact just ran in the background, quietly tracking my steps, with a goal of 10,000 per day. 10,000 steps per day is the magic number recommended by the Surgeon General and the American Heart Association. Dr. Oz pushes it as the goal for daily exercise. It's about 5 miles of walking.

I'm generally not the kind of person who always has my phone with me. I leave it in my purse and pull it out when I need it. But then I realized that meant I wasn't getting a good measure of my steps. I decided to do a little experiment and carry it with me as much as possible for a week. That's when I discovered the gamification that changed my life over the last 3 months. When I hit 10,000 steps, the app jingled out a little "success!" tune and I got a badge. I was hooked.

I started carrying my phone. I started making sure I had shoes I could walk in with me. I started walking at lunch time, because I realized how often I sat at my desk for 8-10 hours every day without moving. I started pestering my husband to walk with me after work because I hadn't hit my 10,000 yet, leading him at one point to say "I'm not as much a slave to that badge as you are!" I started looking at parking lots differently. Can't get a space up close? No worries, just that many more steps toward my 10,000. I even tried to see if there was a second power-user level at 15,000 or 20,000 (sadly, no). If I was close at the end of the day, I have done laps around my house until I got my badge. I have walked around the block one more time to get my badge. I have mentally chastised myself when I forgot to put my phone in my pocket, because I don't know how many steps I got. The badge below I got when my boss and I were in New York City and we walked around the block of our hotel just to watch the badge pop up.

There are a bunch of tools on the market now with similar ideas for helping you track your exercise and make it social. There are apps (my favorite is still Zombies, Run!). You could buy a FitBit or an UP by Jawbone. Interactive Fitness makes the Expresso stationary bike with built-in video games. All are designed to help you be more aware of your activity and keep you engaged and motivated, and the idea is to help you change your behavior. I know someone who would spend extra time and work hard on the Expresso because he had built up strategies for how to kill the most dragons while he was riding to get more points. When the machine broke down, he didn't ride a different bike because it just wasn't that interesting.

But for me, the simple jingle and badge have been all I needed. I admit, I still giggle gleefully when I hear the tune sing out from my pocket. After a few weeks, I noticed I had dropped a few pounds. Not a lot, just 2-3. But then I was really hooked. I started making a point both to eat a little less and to hit 10,000 steps as much as I could. I bemoaned that during the floods in Boulder I wasn't hitting my 10,000 steps. And now, a few months later, I'm almost 10 lbs lighter. All for 1 badge a day.

So yes, simple gamification can increase motivation and engagement. And that can lead to changes in behavior. Now the job is to apply that to the enterprise space in a meaningful and engaging way.

    Read the article

  • Provocative Tweets From the Dachis Social Business Summit

    - by Mike Stiles
On June 20, all who follow social business, and how social is changing how we do business and internal business structures, gathered in London for the Dachis Social Business Summit. In addition to Oracle SVP Product Development Reggie Bradford, brands and thought leaders posed some thought-provoking ideas and figures. Here are some of the most oft-tweeted points, and the thoughts they provoked.

Tweet: The winners will be those who use data to improve performance.
Thought: Everyone is dwelling on ROI. Why isn't everyone dwelling on the opportunity to make their product or service better (as if that doesn't have an effect on ROI)? Big data can improve you…let it.

Tweet: High performance hinges on integrated teams that interact with each other.
Thought: Team members may work well with each other, but does the team as a whole "get" what other teams are doing? That's the key to an integrated, companywide workforce. (Internal social platforms can facilitate that, by the way.)

Tweet: Performance improvements come from making the invisible visible.
Thought: Many of the factors that drive customer behavior and decisions are invisible. Through social, customers are now showing us what we couldn't see before…if we're paying attention.

Tweet: Games have continuous feedback, which is why they're so engaging. Apply that to business operations.
Thought: You think your employees have an obligation to be 100% passionate and engaged at all times about making you richer. Think again. Like customers, they must be motivated. Visible insight that they're advancing on their goals helps.

Tweet: Who can add value to the data? Data will tend to migrate to where it will be most effective.
Thought: Not everybody needs all the data. One team will be able to make sense of, use, and add value to data that may be irrelevant to another team. Like a strategized football play, the data has to get sent to the spot on the field where it's needed most.

Tweet: The sale isn't the light at the end of the tunnel, it's the start of a new marketing cycle.
Thought: Another reason the ROI question is fundamentally flawed. The sale is not the end of the potential return on investment. After-the-sale service and nurturing begin where the sales "victory" ends.

Tweet: A dead sale is one that's not shared. People must be incentivized to share.
Thought: Guess what, customers now know their value to you as marketers on your behalf. They'll tell people about your product, but you've got to answer, "Why should I?" And you've got to answer it with something substantial, not lame trinkets.

Tweet: Social user motivations are competition, affection, excellence and curiosity.
Thought: Your followers will engage IF they can get something for doing it, love your culture so much they want you to win, are consistently stunned at the perfection and coolness of your products, or have been stimulated enough to want to know more.

Tweet: In Europe, 92% surveyed said they couldn't care less about brands.
Thought: Oh well, so much for loving you or being impressed enough with your products and service that they want you to win. We've got a long way to go.

Tweet: A complaint is a gift.
Thought: Our instinct where complaints are concerned is to a) not listen, b) dismiss the one who complains as a kook, c) make excuses, and d) reassure ourselves with internal group-think that they're wrong and we're right. It's the perfect recipe for how to never, ever grow or get better. In a way, this customer cares more than you do.

Tweet: 78% of consumers think peer recommendation is the best form of advertising. Eventually, engagement is going to eat advertising.
Thought: Why is peer recommendation best? Trust. If a friend tells me how great a movie was, I believe him. He has credibility with me. He's seen it, and he couldn't care less if I buy a ticket. He's telling me it was awesome because he sincerely believes that it was. That's gold.

Tweet: 86% of customers are willing to pay more for a better customer experience.
Thought: This "how mad can we make our customers without losing them" strategy has to end. The customer experience has actual monetary value, money you're probably leaving on the table.

@mikestiles
Photo: stock.xchng

    Read the article

< Previous Page | 9 10 11 12 13 14 15  | Next Page >