Search Results

Search found 8873 results on 355 pages for 'auto populate'.


  • SQL Developer Debugging, Watches, Smart Data, & Data

    - by thatjeffsmith
    After presenting the SQL Developer PL/SQL debugger for about an hour yesterday at KScope12 in San Antonio, my boss came up and asked, “Now, would you really want to know what the Smart Data panel does?” Apparently I had ‘made up’ my own story about what that panel’s intent is based on my experience with it. Not good Jeff, not good. It was a very small point of my presentation, but I probably should have read the docs. The Smart Data tab displays information about variables, using your Debugger: Smart Data preferences. You can also specify these preferences by right-clicking in the Smart Data window and selecting Preferences. The Smart Data panel auto-inspects the last X accessed variables. So if you have a program with 26 variables, instead of showing you all 26, it will just show you the last two variables that were referenced in your program. If you were to click on the ‘Data’ debug panel, you’ll see EVERYTHING. And if you only want to see a very specific set of values, then you should use Watches. The Smart Data panel: as I step through the code, the variables being tracked change as they are referenced, and only the most recent ones display. This is controlled by the ‘Maximum Locations to Remember’ preference. The Data panel: all variables are displayed. This might be information overload on large PL/SQL programs where you have many dozens or even hundreds of variables to track. Watches: watches are added manually and only show what you ask for (data on demand). Remember, you can interact with your data: if you want to do more than just watch, you can mouse-right on a data element and change the value of the variable as the program is running, to test your ‘What if?’ scenarios. This is one of the primary benefits to debugging over using DBMS_OUTPUT to track what’s happening in your program.

    Read the article

  • Grub2 -- Dualboot Ubuntu LTS 12.04 and Windows 7 -- Detects two Windows 7 (loader) entries

    - by DarkIron112
    this is the first question I have ever asked the Ubuntu Community. :D I'm fairly new to Ubuntu, but I understand the basics and know how to navigate the Terminal. I also know how to ask for/research my problems before asking for help. I have scoured the internet high and low and learned much of how Grub2 works. But nothing has helped me to solve my problem. My problem is this: I have a computer that has three hard drives. It previously had Windows XP, but I upgraded to Windows 7. I also installed Ubuntu 12.04 LTS (Precise Pangolin). During my installation of Windows 7, there was a failure and I had to restart the installation. Afterwards, I installed Ubuntu. After some trouble removing all traces of the XP OS (Ubuntu auto-detected it, but not Windows 7) I got the two OSes working flawlessly. Or, almost. When booting up, Grub2 used to display Ubuntu, Ubuntu Recovery Mode, Other Versions of Linux, memtest, followed by "Windows 7 (loader) on /dev/sda1" and "Windows 7 (loader) on /dev/sdb1". I eventually removed Recovery Mode, Other Versions, and Memtest. Now, when I run:

        sudo update-grub

    I get this print-out:

        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.2.0-26-generic
        Found initrd image: /boot/initrd.img-3.2.0-26-generic
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sdb1

    I would like to remove "Windows 7 (loader) on /dev/sda1", as it is a broken entry that shouldn't exist, and must have been installed during my first Windows 7 attempt. I cannot find a Windows 7 entry in /etc/grub.d... And I don't know where to look. Here is a layout of my hard drives:

        /dev/sda1/ (1.82 TiB), NTFS ("Media")
        /dev/sdb1/ (100 Mib), NTFS ("System Reserved")
        /dev/sdb2/ (149 GiB), NTFS ("Windows 7")
        /dev/sdb3/ (149 GiB), Extended (" ")
        /dev/sdb4/ (145 GiB), ext4 (" ")
        /dev/sdb5/ (4 GiB), linux-swap (" ")
        /dev/sdc1/ (488.28 GiB), NTFS ("Downloads")
        /dev/sdc2/ (488.28 GiB), NTFS ("AltMedia")
        /dev/sdc3/ (886.45 GiB), NTFS ("Personal")
        unallocated (2.09 MiB), unallocated

    What I think has happened: Windows 7 installed first and badly. I installed it again. First, there was Windows XP to guide where the bootloader went, so it was put on /dev/sdb1/. But the second time no such guide existed, so the machine put another bootloader on /dev/sda1/. sda1, by the way, is the only partition on a 2TB drive. No boot record partition appears to exist according to gedit. I'm not sure where Grub2 is getting this information from. But, there it is. Is there anything somebody can do to help me? Or, is there any more information I should add? Thank you, community!

    Read the article

  • Responding to Invites

    - by Daniel Moth
    Following up from my post about Sending Outlook Invites here is a shorter one on how to respond. Whatever your choice (ACCEPT, TENTATIVE, DECLINE), if the sender has not unchecked the "Request Response" option, then send your response. Always send your response. Even if you think the sender made a mistake in keeping it on, send your response. Seriously, not responding is plain rude. If you knew about the meeting, and you are happy investing your time in it, and the time and location work for you, and there is an implicit/explicit agenda, then ACCEPT and send it. If one or more of those things don't work for you then you have a few options. Send a DECLINE explaining why. Reply with email to ask for further details or for a change to be made. If you don’t receive a response to your email, send a DECLINE when you've waited enough. Send a TENTATIVE if you haven't made up your mind yet. Hint: if they really require you there, they'll respond asking "why tentative" and you have a discussion about it. When you deem appropriate, instead of the options above, you can also use the counter propose feature of Outlook but IMO that feature has questionable interaction model and UI (on both sender and recipient) so many people get confused by it. BTW, two of my outlook rules are relevant to invites. The first one auto-marks as read the ACCEPT responses if there is no comment in the body of the accept (I check later who has accepted and who hasn't via the "Tracking" button of the invite). I don’t have a rule for the DECLINE and TENTATIVE cause typically I follow up with folks that send those.   The second rule ensures that all Invites go to a specific folder. That is the first folder I see when I triage email. It is also the only folder which I have configured to show a count of all items inside it, rather than the unread count - when sending a response to an invite the item disappears from the folder and hence it is empty and not nagging me. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Sqlite & Entity Framework 4

    - by Dane Morgridge
    I have been working on a few client app projects in my spare time that need to persist small amounts of data and have been looking for an easy to use embedded database. I really like db4o but I'm not wanting to open source this particular project so it was not an option. Then I remembered that there was an ADO.NET provider for sqlite. Being a fan of sqlite in general, I downloaded it and gave it an install. The installer added tooling support for both Visual Studio 2008 & 2010 which is nice because I am working almost exclusively in 2010 at the moment. I noticed that the provider also had support for Entity Framework, but not specifically v4. I created a database using the tools that get installed with Visual Studio and all seemed to work fine. I went on to create an Entity Framework context and selected the sqlite database and to my surprise it worked without any problems. The model showed up just like it would for any database and so I started to write a little code to test and then.. BAM!.. Exception. "Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information." A quick bit of searching on Bing found the answer. To get it working, you need to include the following code in your web.config file:

        <startup useLegacyV2RuntimeActivationPolicy="true">
          <supportedRuntime version="v4.0" />
        </startup>

    And then everything magically works. Entity Framework 4 features worked, like lazy loading and even the POCO templates worked. The only thing that didn't work was the model first development. The SQL generated was for SQL Server and of course wouldn't run on sqlite without some modifications. The only other oddity I found was that in order to have an auto incrementing id, you have to use the full integer data type for sqlite; a regular int won't do the trick. This translates to an Int64, or a long when working with it in Entity Framework. Not a big deal, but something you need to be aware of. All in all, I am quite impressed with the Entity Framework support I found with sqlite. I wasn't really expecting much at all, and I was pleasantly surprised. I downloaded the ADO.NET sqlite provider from http://sqlite.phxsoftware.com/. If you want to use an embedded database with Entity Framework, give it a look. It will be well worth your time.
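    To make the key-type point concrete, here is a minimal sketch of what an entity ends up looking like (the Note class and its properties are hypothetical, not from the post): the auto-incrementing primary key is declared as long, because SQLite's full integer type surfaces in Entity Framework as Int64.

        // Hypothetical POCO entity used with the System.Data.SQLite EF provider.
        // SQLite's auto-incrementing INTEGER key maps to Int64, so the C# property is a long, not an int.
        public class Note
        {
            public long Id { get; set; }      // auto-increment key (Int64)
            public string Title { get; set; }
            public string Body { get; set; }
        }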

    Read the article

  • EPM 11.1.2 - R&A DATABASE CONNECTIONS DISAPPEAR FROM THE "DATABASE CONNECTION MANAGER"

    - by Powder
    When accessing the database connection panel through Reporting and Analysis, all previously entered database connections do not appear. This is due to a bug in the Windows SMB2 protocol, and to work around it you have to disable the protocol. On Windows 2008 the protocol is automatically enabled. This needs to be done on both the servers and the clients. Note that “server” is the server which hosts the RAF repository service and RM1 folder, and “client” is the server which hosts a replicated Repository service that accesses repository files via the network, i.e. \\<server_host>\RM1

    In order to disable SMB 2.0 on the server side, follow these steps:
    1. Run "regedit" on the Windows Server 2008 based computer.
    2. Expand and locate the subtree as follows: HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters
    3. Add a new REG_DWORD key with the name of "Smb2" (without quotation marks):
       Value name: Smb2
       Value type: REG_DWORD
       0 = disabled
       1 = enabled
    4. Set the value to 0 to disable SMB 2.0, or set it to 1 to re-enable SMB 2.0.
    5. Reboot the server.

    To disable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/nsi
    sc config mrxsmb20 start= disabled
    Note there's an extra " " (space) after the "=" sign.

    To re-enable SMB 2.0 for Windows Vista or Windows Server 2008 systems that are the “client” systems, run the following commands:
    sc config lanmanworkstation depend= bowser/mrxsmb10/mrxsmb20/nsi
    sc config mrxsmb20 start= auto
    Again, note there's an extra " " (space) after the "=" sign.

    Read the article

  • Handling Coding Standards at Work (I'm not the boss)

    - by Josh Johnson
    I work on a small team, around 10 devs. We have no coding standards at all. There are certain things that have become the norm but some ways of doing things are completely disparate. My big one is indentation. Some use tabs, some use spaces, some use a different number of spaces, which creates a huge problem. I often end up with conflicts when I merge because someone used their IDE to auto format and they use a different character to indent than I do. I don't care which we use I just want us all to use the same one. Or else I'll open a file and some lines have curly brackets on the same line as the condition while others have them on the next line. Again, I don't mind which one so long as they are all the same. I've brought up the issue of standards to my direct manager, one on one and in group meetings, and he is not overly concerned about it (there are several others who share the same view as myself). I brought up my specific concern about indentation characters and he thought a better solution would be to, "create some kind of script that could convert all that when we push/pull from the repo." I suspect that he doesn't want to change and this solution seems overly complicated and prone to maintenance issues down the road (also, this addresses only one manifestation of a larger issue). Have any of you run into a similar situation at work? If so, how did you handle it? What would be some good points to help sell my boss on standards? Would starting a grass roots movement to create coding standards, among those of us who are interested, be a good idea? Am I being too particular, should I just let it go? Thank you all for your time. Note: Thanks everyone for the great feedback so far! To be clear, I don't want to dictate One Style To Rule Them All. I'm willing to concede my preferred way of doing something in favor of what suits everyone the best. I want consistency and I want this to be a democracy. I want it to be a group decision that everyone agrees on. True, not everyone will get their way, but I'm hoping that everyone will be mature enough to compromise for the betterment of the group. Note 2: Some people are getting caught up in the two examples I gave above. I'm more after the heart of the matter. It manifests itself with many examples: naming conventions, huge functions that should be broken up, should something go in a util or service, should something be a constant or injected, should we all use different versions of a dependency or the same, should an interface be used for this case, how should unit tests be set up, what should be unit tested, (Java specific) should we use annotations or external config. I could go on.

    Read the article

  • Is the science of Computer Science dead?

    - by Veaviticus
    Question : Is the science and art of CS dead? By that I mean, the real requirements to think, plan and efficiently solve problems seems to be falling away from CS these days. The field seems to be lowering the entry-barrier so more people can 'program' without having to learn how to truly program. Background : I'm a recent graduate with a BS in Computer Science. I'm working a starting position at a decent sized company in the IT department. I mostly do .NET and other Microsoft technologies at my job, but before this I've done Java stuff through internships and the like. I personally am a C++ programmer for my own for-fun projects. In Depth : Through the work I've been doing, it seems to me that the intense disciplines of a real science don't exist in CS anymore. In the past, programmers had to solve problems efficiently in order for systems to be robust and quick. But now, with the prevailing technologies like .NET, Java and scripting languages, it seems like efficiency and robustness have been traded for ease of development. Most of the colleagues that I work with don't even have degrees in Computer Science. Most graduated with Electrical Engineering degrees, a few with Software Engineering, even some who came from tech schools without a 4 year program. Yet they get by just fine without having the technical background of CS, without having studied theories and algorithms, without having any regard for making an elegant solution (they just go for the easiest, cheapest solution). The company pushes us to use Microsoft technologies, which take all the real thought out of the matter and replace it with libraries and tools that can auto-build your project for you half the time. I'm not trying to hate on the languages, I understand that they serve a purpose and do it well, but when your employees don't know how a hash-table works, and use the wrong sorting methods, or run SQL commands that are horribly inefficient (but get the job done in an acceptable time), it feels like more effort is being put into developing technologies that coddle new 'programmers' rather than actually teaching people how to do things right. I am interested in making efficient and, in my opinion, beautiful programs. If there is a better way to do it, I'd rather go back and refactor it than let it slide. But in the corporate world, they push me to complete tasks quickly rather than elegantly. And that really bugs me. Is this what I'm going to be looking forward to the rest of my life? Are there still positions out there for people who love the science and art of CS rather than just the paycheck? And on the same note, here's a good read if you haven't seen it before The Perils Of Java Schools

    Read the article

  • Fixing the #mvvmlight code snippets in Visual Studio 11

    - by Laurent Bugnion
    If you installed the latest MVVM Light version for Windows 8, you may encounter an issue where code snippets are not displayed correctly in the Intellisense popup. I am working on a fix, but for now here is how you can solve the issue manually. The code snippets: MVVM Light, when installed correctly, will install a set of code snippets that are very useful to allow you to type less code. As I like to say, code is where bugs are, so you want to type as little of that as possible ;) With code snippets, you can easily auto-insert segments of code and easily replace the keywords where needed. For instance, every coder who uses MVVM as his favorite UI pattern for XAML based development is used to the INotifyPropertyChanged implementation, and how boring it can be to type these “observable properties”. Obviously a good fix would be something like an “Observable” attribute, but that is not supported in the language or the framework for the moment. Another fix involves “IL weaving”, which is a post-build operation modifying the generated IL code and inserting the “RaisePropertyChanged” instruction. I admire the invention of those who developed that, but it feels a bit too much like magic to me. I prefer more “down to earth” solutions, and thus I use the code snippets. Fixing the issue: normally, you should see the code snippets in Intellisense when you position your cursor in a C# file and type mvvm. All MVVM Light snippets start with these 4 letters. However, in Windows 8 CP, there is an issue that prevents them from appearing correctly, so you won’t see them in the Intellisense window. To restore that, follow these steps:

    1. In Visual Studio 11, open the menu Tools, Code Snippets Manager.
    2. In the combobox, select Visual C#.
    3. Press Add…
    4. Navigate to C:\Program Files (x86)\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\SnippetsWin8 and select the CSharp folder.
    5. Press Select Folder.
    6. Press OK to close the Code Snippets Manager.

    Now if you type mvvm in a C# file, you should see the snippets in your Intellisense window. Cheers, Laurent Bugnion (GalaSoft)
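    For reference, here is roughly the kind of boilerplate one of these snippets expands to; the property name is a placeholder, and this assumes a class deriving from MVVM Light's ViewModelBase, which provides RaisePropertyChanged:

        // Sketch of an "observable property" as generated by an mvvm snippet (names are placeholders).
        public const string MyPropertyPropertyName = "MyProperty";
        private string myProperty = string.Empty;

        public string MyProperty
        {
            get { return myProperty; }
            set
            {
                if (myProperty == value)
                {
                    return;
                }

                myProperty = value;
                RaisePropertyChanged(MyPropertyPropertyName); // provided by ViewModelBase
            }
        }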

    Read the article

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a windows desktop application with a tree view. Due to lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: populate a tree view with nodes that resemble fields in a data structure, and when a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field in the control, and (4) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control in the form of the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each custom node has an associated custom control which extends a base abstract control class. Example TreeNode implementation:

        //The Tree Node Base Class
        public abstract class TreeViewNodeBase : TreeNode
        {
            public abstract CustomControlBase Control { get; }

            public TreeViewNodeBase(ExtractionField field)
            {
                UpdateControl(field);
            }

            public virtual void UpdateControl(ExtractionField field)
            {
                Control.UpdateControl(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual void SaveChanges(ExtractionField field)
            {
                Control.SaveChanges(field);
                UpdateCaption(FormatValueForCaption());
            }

            public virtual string FormatValueForCaption()
            {
                return Control.FormatValueForCaption();
            }

            public virtual void UpdateCaption(string newValue)
            {
                this.Text = Caption;
                this.LongText = newValue;
            }
        }

        //The tree node implementation class
        public class ExtractionTypeNode : TreeViewNodeBase
        {
            private CustomDropDownControl control;

            public override CustomControlBase Control
            {
                get
                {
                    if (control == null)
                    {
                        control = new CustomDropDownControl();
                        control.label1.Text = Caption;
                        control.comboBox1.Items.Clear();
                        control.comboBox1.Items.AddRange(
                            Enum.GetNames(
                                typeof(ExtractionField.ExtractionType)));
                    }
                    return control;
                }
            }

            public ExtractionTypeNode(ExtractionField field)
                : base(field)
            {
            }
        }

        //The custom control base class
        public abstract class CustomControlBase : UserControl
        {
            public abstract void UpdateControl(ExtractionField field);
            public abstract void SaveChanges(ExtractionField field);
            public abstract string FormatValueForCaption();
        }

        //The custom control generic implementation (view)
        public partial class CustomDropDownControl : CustomControlBase
        {
            public CustomDropDownControl()
            {
                InitializeComponent();
            }

            public override void UpdateControl(ExtractionField field)
            {
                //Nothing to do here
            }

            public override void SaveChanges(ExtractionField field)
            {
                //Nothing to do here
            }

            public override string FormatValueForCaption()
            {
                //Nothing to do here
                return string.Empty;
            }
        }

        //The custom control specific implementation
        public class FieldExtractionTypeControl : CustomDropDownControl
        {
            public override void UpdateControl(ExtractionField field)
            {
                comboBox1.SelectedIndex = comboBox1.FindStringExact(field.Extraction.ToString());
            }

            public override void SaveChanges(ExtractionField field)
            {
                field.Extraction = (ExtractionField.ExtractionType)
                    Enum.Parse(typeof(ExtractionField.ExtractionType),
                        comboBox1.SelectedItem.ToString());
            }

            public override string FormatValueForCaption()
            {
                return string.Empty;
            }
        }

    The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?
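    For what it's worth, here is a rough sketch of how the pieces above might be wired up on the form side; the editorPanel, currentField, and event handler names are assumptions for illustration, not part of the original code:

        // Hypothetical wiring: show the selected node's editor and push edits back to the data structure.
        private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
        {
            var node = e.Node as TreeViewNodeBase;
            if (node == null) return;

            editorPanel.Controls.Clear();          // editorPanel is an assumed host Panel
            node.Control.Dock = DockStyle.Fill;
            editorPanel.Controls.Add(node.Control);
            node.UpdateControl(currentField);      // currentField is an assumed ExtractionField instance
        }

        private void saveButton_Click(object sender, EventArgs e)
        {
            var node = treeView1.SelectedNode as TreeViewNodeBase;
            if (node != null)
            {
                node.SaveChanges(currentField);    // the control writes its value back into the field
            }
        }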

    Read the article

  • Serving up a RSS feed in MVC using WCF Syndication

    - by brian_ritchie
    With .NET 3.5, Microsoft added the SyndicationFeed class to WCF for generating ATOM 1.0 & RSS 2.0 feeds.  In .NET 3.5, it lives in System.ServiceModel.Web but was moved into System.ServiceModel in .NET 4.0. Here's some sample code on constructing a feed:

        SyndicationFeed feed = new SyndicationFeed(title, description, new Uri(link));
        feed.Categories.Add(new SyndicationCategory(category));
        feed.Copyright = new TextSyndicationContent(copyright);
        feed.Language = "en-us";
        feed.Copyright = new TextSyndicationContent(DateTime.Now.Year + " " + ownerName);
        feed.ImageUrl = new Uri(imageUrl);
        feed.LastUpdatedTime = DateTime.Now;
        feed.Authors.Add(new SyndicationPerson() { Name = ownerName, Email = ownerEmail });

        var feedItems = new List<SyndicationItem>();
        foreach (var item in Items)
        {
            var sItem = new SyndicationItem(item.title, null, new Uri(link));
            sItem.Summary = new TextSyndicationContent(item.summary);
            sItem.Id = item.id;
            if (item.publishedDate != null)
                sItem.PublishDate = (DateTimeOffset)item.publishedDate;
            sItem.Links.Add(new SyndicationLink() { Title = item.title, Uri = new Uri(link), Length = item.size, MediaType = item.mediaType });
            feedItems.Add(sItem);
        }
        feed.Items = feedItems;

    Then, we create a custom ContentResult to serialize the feed & stream it to the client:

        public class SyndicationFeedResult : ContentResult
        {
            public SyndicationFeedResult(SyndicationFeed feed)
                : base()
            {
                using (var memstream = new MemoryStream())
                using (var writer = new XmlTextWriter(memstream, System.Text.UTF8Encoding.UTF8))
                {
                    feed.SaveAsRss20(writer);
                    writer.Flush();
                    memstream.Position = 0;
                    Content = new StreamReader(memstream).ReadToEnd();
                    ContentType = "application/rss+xml";
                }
            }
        }

    Finally, we wire it up through the controller:

        public class RssController : Controller
        {
            public SyndicationFeedResult Feed()
            {
                var feed = new SyndicationFeed();
                // populate feed...
                return new SyndicationFeedResult(feed);
            }
        }

    In the next post, I'll discuss how to add iTunes markup to the feed to publish it on iTunes as a Podcast.

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it’s no surprise that the accounting standard boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IFRS and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts on May 28, 2014 the FASB and IFRS released the IFRS 15 / ASU 2014-9 (Revenue from Contracts with Customers) converged accounting standard. This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations. Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate, after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs. In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer’s satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period. Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standard boards to deliver an adaptive, comprehensive, new generation revenue recognition solution,” said Rondy Ng, Senior Vice President, Applications Development. “With the recently announced IFRS 15 / ASU 2014-9, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud,” said Rondy. Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.
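    For readers who want a feel for the allocation step described above, here is a tiny illustrative calculation (the numbers, names, and code are invented for this example and are not from Oracle Revenue Management Cloud): the contract price is spread across deliverables in proportion to their estimated selling prices.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class RevenueAllocationSketch
        {
            static void Main()
            {
                // Hypothetical bundled contract: a car sold with "free" service for a total price of 30,000.
                decimal contractPrice = 30000m;
                var esp = new Dictionary<string, decimal>   // estimated selling price per deliverable
                {
                    { "Car",     29000m },
                    { "Service",  2000m },
                };

                decimal espTotal = esp.Values.Sum();
                foreach (var deliverable in esp)
                {
                    // Allocate the contract price in proportion to each deliverable's ESP.
                    decimal allocated = contractPrice * deliverable.Value / espTotal;
                    Console.WriteLine("{0}: {1:N2}", deliverable.Key, allocated);
                }
                // The service portion (about 1,935.48 here) is recognized only when the service
                // obligation is satisfied; the car portion is recognized at delivery.
            }
        }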

    Read the article

  • Highlights from recent Yammer video

    - by Eric Jensen
    A few weeks back, Ryan Kennedy of Yammer gave a talk about Berkeley DB Java Edition. You can find it posted here on Alex Popescu's Blog, or go directly to the video post itself. It was full of useful nuggets of information, such as why they chose to use BDB JE, performance, and some tips & tricks at the end. At over 40 minutes, the video is quite long. Ryan is an entertaining speaker, so I suggest you watch all of it. But if you only have time for the highlights, here are some times you can sync to:

    06:18 hear the Berkeley DB JE features that caused Yammer to select it, including replication, auto leader election, failover, and configurable durability and consistency guarantees
    23:10 system performance characteristics
    35:08 check out the tips and tricks for using Berkeley DB JE

    I know the Berkeley DB development team is very pleased that BDB JE is working out well for Yammer. We definitely encourage others out there to take note of this success, especially if your requirements are similar to Yammer's (which Ryan outlines at the beginning of his talk).

    Read the article

  • UDK game Prisoners/Guards

    - by RR_1990
    For school I need to make a little game with UDK. The concept of the game is: the player is the head guard, and he will have some other guards (bots) who will follow him. Between the other guards and the player are some prisoners who need to evade the other guards. It needs to look like this. My idea was to let the guard bots follow the player at a certain distance and let the prisoner bots in the middle try to evade the guard bots. The problem is that I'm new to UnrealScript and the school doesn't support me that well. Until now I have only been able to make the guard bots follow me. I hope you guys can help me or make me something that will make this game work. Here is the class I'm using to let the bots follow me:

        class ChaseControllerAI extends AIController;

        var Pawn player;
        var float minimalDistance;
        var float speed;
        var float distanceToPlayer;
        var vector selfToPlayer;

        auto state Idle
        {
            function BeginState(Name PreviousStateName)
            {
                Super.BeginState(PreviousStateName);
            }

            event SeePlayer(Pawn p)
            {
                player = p;
                GotoState('Chase');
            }

        Begin:
            player = none;
            self.Pawn.Velocity.x = 0.0;
            self.Pawn.Velocity.Y = 0.0;
            self.Pawn.Velocity.Z = 0.0;
        }

        state Chase
        {
            function BeginState(Name PreviousStateName)
            {
                Super.BeginState(PreviousStateName);
            }

            event PlayerOutOfReach()
            {
                `Log("ChaseControllerAI CHASE Player out of reach.");
                GotoState('Idle');
            }

            // class ChaseController extends AIController; CONTINUED
            // State Chase (continued)
            event Tick(float deltaTime)
            {
                `Log("ChaseControllerAI in Event Tick.");
                selfToPlayer = self.player.Location - self.Pawn.Location;
                distanceToPlayer = Abs(VSize(selfToPlayer));
                if (distanceToPlayer > minimalDistance)
                {
                    PlayerOutOfReach();
                }
                else
                {
                    self.Pawn.Velocity = Normal(selfToPlayer) * speed;
                    //self.Pawn.Acceleration = Normal(selfToPlayer) * speed;
                    self.Pawn.SetRotation(rotator(selfToPlayer));
                    self.Pawn.Move(self.Pawn.Velocity*0.001); // or *deltaTime
                }
            }

        Begin:
            `Log("Current state Chase:Begin: " @GetStateName()@"");
        }

        defaultproperties
        {
            bAdjustFromWalls=true;
            bIsPlayer= true;
            minimalDistance = 1024; //org 1024
            speed = 500;
        }

    Read the article

  • Ubuntu boots to terminal on start up

    - by Jules
    For a long time I've been unable to get updates due to a "repositories not found" error. Yesterday someone fixed this for me but after installing 94 days worth of updates my system wanted to restart. It looks like it is booting normally but then it opens a terminal and asks for my login and password. I had tried Ctrl+ Alt +F7 and startx to no avail. Here is everything that appears on screen when I turn the computer on. Ubuntu 10.04.4 LTS box-o-doom tty1 box-o-doom login:julian password: last login: Sun Jul 8 10:28:02 BST tty1 Linux box-o-doom 2.6.32-41-generic-pae #91-Ubuntu SMP Wed Jun 13 12:00:09 UTC 20 12 i686 GNU/Linux Ubuntu 10.04.4 LTS Welcome to Ubuntu! *Documentation: http://help.ubuntu.com julian@box-o-doom:~$_ i then tried dmesg which produced hundreds of lines all very similar to the first line reproduced here [ 9.453119] type=1505 audit1341742405.022:10): operation="profile_replace" pid=743 name="/usr/lib/connman/scripts/dhclient-script" follwed by this at the end [ 9.475880] alloc irq_desc for 27 on node-1 [ 9.475883] alloc kstat_irqs on node-1 [ 9.475890]forcedeth 0000:00:07.0: irq27 for MSI/MSI-X [ 9.760031] hda_code:ALC662 rev1: BIOS auto-probing. [ 10.048095] input:HDA Digital PCBeep as /devices/pci 0000:00:05.o/inp ut/input6 [ 10.862278] ppdev: user-space parallel port driver [ 20.268018] eth0: no IPv6 routers present julian@box-o-doom:~$_ results of startx lots of text scrolls off the screen and i have no way of reading it. but everything i can see is reproduced below current version of pixman: 0.16.4 Before reporting problems, check http://wiki.x.org to make sure that you have the latest version Markers: (--) probed, (**) from config file, (==) defult setting, (++) from command line, (!!) notice, (II) informational. (WW) Warning, (EE) error, (NI) not implemented, (??) unknown. (==) log file: "/var/log/Xorg.0.log", Time: SUn Jul 8 12:02:23 2012 (==) using config file: "/etc/X11/xorg.conf" (==)using config directory: "/usr/lib/X11/xorg.conf.d" FATAL: Module nvidia not found. (EE) NVIDIA: Failed to load the NVIDIA kernal module please check your (EE) NVIDIA: systems kernal log for aditional error messages. (EE) Failed to load module "nvidia" (module specific error, 0) (EE) No drivers available. Fatal server error: no screens found please consult the X.org foundation support at http://wiki.x.org for help please also check the log files at "/var/log/X.org.0.log" for aditional informati on ddxSigGiveUp: Closing log giving up xinit: No such file or directory (errno 2): unable to connect to X server xinit: No suck process (errno 3): server error julian@box-o-doom:~$_

    Read the article

  • Lubuntu Desktop messed up for logged in user, but not for guest

    - by RPi Awesomeness
    I recently upgraded my laptop from Lubuntu 12.04 to 14.04.1 and the upgrade process seemed to go fine. However, when I went to login as my normal user, I encountered an issue. The background loaded up, but none of LXDE or LXPanel showed up, leaving me with an empty desktop and nothing else except two errors. I thought that this was weird, so I just figured something had been messed up and would be fixed by a reboot. But it wasn't. I then tried logging in as guest, and it's just fine. I checked the ~/.xsession-errors file (for my main user, not guest, did it via TTY1) and this is what I got: Script for ibus started at run_im. Script for auto started at run_im. Script for default started at run_im. init: Unable to register as subreaper: Invalid argument init: lxsession main process (1649) killed by TERM signal init: Disconnected from notified D-Bus bus init: job dbus failed to stop init: job upstart-dbus-session-bridge failed to stop init: job upstart-dbus-system-bridge failed to stop init: job upstart-file-bridge failed to stop I also read the sometimes removing the ~/.Xauthority file can help, if the ownership is messed up. ls -l /home/MYUSER/.Xauthority tells me -rw------- 1 MYUSER MYUSER 60 Aug 16 09:57 /home/MYUSER/.Xauthority. Should that be root or something else, or should I try deleting that and ~/.profile. Here's what ~/.profile looks like: # ~/.profile: executed by the command interpreter for login shells. # This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login # exists. # see /usr/share/doc/bash/examples/startup-files for examples. # the files are located in the bash-doc package. # the default umask is set in /etc/profile; for setting the umask # for ssh logins, install and configure the libpam-umask package. #umask 022 # if running bash if [ -n "$BASH_VERSION" ]; then # include .bashrc if it exists if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc" fi fi # set PATH so it includes user's private bin if it exists if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi Should I post the output of dmesg? I'll try and get a screenshot, but does anyone have any idea what could be causing the desktop (LXDE/LXPanel) not to display? EDIT I attempted removing the ~/.XAuthority file, but that didn't seem to do anything.

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: constraining the amount of time that the Console stalls waiting for communication, and reducing and streamlining the amount of data required for an update. A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's configuration: general, advanced section for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency. In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is:

    1. For each configuration mbean (i.e., Servers), populate rows with configuration attributes from the fast, local mbean server
    2. Find a WorkManager
    3. For each server, create a Work instance to obtain runtime mbean attributes for the server, and schedule the Work instance in the WorkManager
    4. Call WorkManager.waitForAll to wait for the WorkItems to finish, constrained by the Management Operation Timeout
    5. For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data
    6. Display the collected data in the table

    In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.) We decided supporting that edge case did not warrant the performance impact for all, and instead we only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment. Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:

    - Using Work Management to obtain health concurrently
    - Applying the operation timeout configuration to constrain how long we will wait
    - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old
    - Eliminating health collection if this portlet is minimized

    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
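    The fan-out-and-wait-with-timeout pattern described above is easy to sketch in general-purpose code. The following is not WebLogic's implementation, just an illustration in C# (with Tasks standing in for Work instances and a made-up FetchRuntimeInfo call) of requesting runtime data from each server concurrently, waiting no longer than the operation timeout, and flagging servers that did not respond in time:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        class ConsoleTimeoutSketch
        {
            // Stand-in for the remote runtime mbean call; in real life this may be slow or hang.
            static string FetchRuntimeInfo(string server)
            {
                Thread.Sleep(100);
                return "RUNNING";
            }

            static void Main()
            {
                var servers = new List<string> { "ManagedServer1", "ManagedServer2" };
                var timeout = TimeSpan.FromSeconds(5);          // the "Management Operation Timeout"
                var work = new Dictionary<string, Task<string>>();

                // Fan out one work item per server; rows already hold the local configuration data.
                foreach (var server in servers)
                {
                    var s = server;
                    work[s] = Task.Run(() => FetchRuntimeInfo(s));
                }

                // Wait for all work items, but no longer than the configured timeout.
                Task.WaitAll(work.Values.ToArray(), timeout);

                // Display what came back; mark servers whose runtime data is incomplete.
                foreach (var entry in work)
                {
                    if (entry.Value.Status == TaskStatus.RanToCompletion)
                        Console.WriteLine("{0}: {1}", entry.Key, entry.Value.Result);
                    else
                        Console.WriteLine("{0}: runtime information not yet available", entry.Key);
                }
            }
        }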

    Read the article

  • Collision detection when pathfinding with pathnodes, UDK

    - by Dave Voyles
    I'm trying to create a class that allows my AIController to path find using pathnodes (NOT NavMeshes). It's doing a swell job of going from point to point in a set order (although I would like for it to be a random patrol at some point), but it gets caught up on collision from time to time, i.e. he'll walk the same set path, and when he runs into the blocks in the middle of the map he continues to rub against them until they finish, and continues on his merry way to the next path node. How can I prevent this from happening, or at least have him move away from the wall if he does a trace and detects that it is there? It looks like I need to use MoveToward() instead of MoveTo(), as MoveToward allows the pawn to adjust its course during movement. I'm just not sure of how to use those parameters. Mougli has a decent tutorial on it, but I can't seem to get it to work correctly with my pathnode array.

        class PathfindingAIController extends UDKBot;

        var array<PathNode> Waypoints;
        var int _PathNode; //declare it at the start so you can use it throughout the script
        var int CloseEnough;

        simulated function PostBeginPlay()
        {
            local PathNode Current;

            super.PostBeginPlay();

            //add the pathnodes to the array
            foreach WorldInfo.AllActors(class'Pathnode',Current)
            {
                Waypoints.AddItem( Current );
            }
        }

        simulated function Tick(float DeltaTime)
        {
            local int Distance;
            local Rotator DesiredRotation;

            super.Tick(DeltaTime);

            if (Pawn != None)
            {
                // Smoothly rotate the pawn towards the focal point
                DesiredRotation = Rotator(GetFocalPoint() - Pawn.Location);
                Pawn.FaceRotation(RLerp(Pawn.Rotation, DesiredRotation, 3.125f * DeltaTime, true), DeltaTime);
            }

            Distance = VSize2D(Pawn.Location - Waypoints[_PathNode].Location);
            if (Distance <= CloseEnough)
            {
                _PathNode++;
            }
            if (_PathNode >= Waypoints.Length)
            {
                _PathNode = 0;
            }

            GoToState('Pathfinding');
        }

        auto state Pathfinding
        {
        Begin:
            if (Waypoints[_PathNode] != None) // make sure there is a pathnode to move to
            {
                MoveTo(Waypoints[_PathNode].Location); //move to it
                `log("STATE: Pathfinding");
            }
        }

        DefaultProperties
        {
            CloseEnough=400
            bIsplayer = True
        }

    Read the article

  • IE9 Loses Some CSS After Particular Form Submit [migrated]

    - by Asherion
    The site I am editing has a search form. For the record, there are several other forms on the site, contact and the like. This is the only one with an issue. Upon submission of the form, SOME of the styling is lost in IE9 (possibly other versions of IE, haven't tested that yet). Primarily, the margins and colors set in html and body appear to have been lost. Menus, banner, text, etc all appear to retain styles. All styles are on one sheet, that are used here... Any helpful advice? Here is the contents of the search page and the php used to check for the form, if that helps, and the css that I think is lost.

    THE HTML:

        <div id="search">
          <br />
          <div style="float:right;font-size:.8em;">
            <form name="form_sidesearch" action="search.html" method="post">
              <input type="hidden" name="action" value="search" />
              <input type="text" name="search_value" value="<?php echo $systems_primary->search_value ?>" />
              <input type="submit" name="submit_search" value="Search Website" />
            </form>
            <br />
          </div>
        </div>
        <?php echo stripslashes($search_results);

    THE PHP:

        <?php
        // -- Begin Search --------------------------------------------------------------------------------------
        if($_REQUEST["action"] === "search")
        {
            if(strlen($_REQUEST["pg"]) <= 0)
            {
                $_REQUEST["pg"] = 1;
            }
            $search_results = $systems_primary->search_website("index",urldecode($_REQUEST["search_value"]),"<div class=\"listing ui-corner-all\"><a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" class=\"listing_title\">{ENTRY_TITLE}</a>{ENTRY_CONTENT} <a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" style=\"font-size:.8em;\">...read more</a></div><br /><br />",345,"all",10,$_REQUEST["pg"]);
        }
        // -- End Search ----------------------------------------------------------------------------------------
        ?>

    THE LOST CSS (could be more):

        html {
            background-color:#F6E6C8;
            font-size:16px;
            font:Helvetica;
        }
        body {
            width:1027px;
            margin:0 auto;
            background-color:#ffffff;
            font: arial, times new roman, sans-serif;
        }

    Read the article

  • how to start LXDE session automatically after tightvncserver starts to make me able see desktop when connecting to the host via vncclient?

    - by Oleksandr Dudchenko
    I have system which is equipped with Intel Celeron processor 1.1 GHz s370 with 384 Mb of RAM on Intel d815egew motherboard which supports wake-on-lan function. I want to use such a PC for Internet sharing to the local network. Also this PC is a DHCP+DNS server as well as router/gateway. Based on above I decided to install Lubuntu as it is lightweight system. I installed Lubuntu 10.04.4 LTS from alternate ISO. System has no auto login. System boots and has acceptable performance. Host PC has onboard 4 network adapters: eth0 – ethernet controller which is used for Local Network connections. Has static address 10.0.0.1 eth1 – ethernet controller which is not used and not configured so far, I plan to connect printer here later on. eth2 - ethernet controller which is used to connect to Internet, which we plan to share for the local network wlan0 – wireless controller, it is used in role of access poit for local Network and has address 10.0.0.2 We want to control our gateway remotely. So, we need to be able to power it on remotely. To allow this I’ve done the following things: $ cd /etc/init.d/ made a new file with command $ sudo vim wakeonlanconfig Wrote the following lines to the newly created file, saved and closed it #!/bin/bash ethtool -s eth0 wol g ethtool -s eth2 wol g exit Made the abovementioned file executable $ sudo chmod a+x wakeonlanconfig Then included it into autostart sequence during boot. $ sudo update-rc.d -f wakeonlanconfig defaults after system reboot we will be able to poweron system remotely. Than we need to have a possibility to connect remotely to the host via SSH and VNC. So, I installed following packets with the following commands: $ sudo apt-get update $ sudo apt-get install openssh-server tightvncserver Add ssh daemon into autostart sequence during boot. $ sudo update-rc.d -f ssh defaults Power off the host PC $ sudo halt Then I went to remote place, send magic paket and powered the Host up. System started... And I connected to the host via Putty from remote system under Windows. Than logged in and run the command to start vnc server. $ tightvncserver -geometry 800x600 -depth 16 :2 VNC server successfully started and I got message like follows. New 'X' desktop is gateway:2 Starting applications specified in /home/dolv/.vnc/xstartup Log file is /home/dolv/.vnc/gateway:2.log Using UltraVNC Viewer programm under windows I connected to the host's vnc server, enterd the password and.... sow only mouse cursor in form of cross on a grey background of 800x600 dots, no desktop. Here is my .vnc/xstartup file #!/bin/sh xrdb $HOME/.Xresources xsetroot -solid grey #x-terminal-emulator -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" & #x-window-manager & # Fix to make GNOME work export XKL_XMODMAP_DISABLE=1 /etc/X11/Xsession The Question: What I have to change and where to make LXDE session start automatically after tightvncserver starts?

    Read the article

  • Questions re: Eclipse Jobs API

    - by BenCole
    Similar to http://stackoverflow.com/questions/8738160/eclipse-jobs-api-for-a-stand-alone-swing-project This question mentions the Jobs API from the Eclipse IDE: ...The disadvantage of the pre-3.0 approach was that the user had to wait until an operation completed before the UI became responsive again. The UI still provided the user the ability to cancel the currently running operation but no other work could be done until the operation completed. Some operations were performed in the background (resource decoration and JDT file indexing are two such examples) but these operations were restricted in the sense that they could not modify the workspace. If a background operation did try to modify the workspace, the UI thread would be blocked if the user explicitly performed an operation that modified the workspace and, even worse, the user would not be able to cancel the operation. A further complication with concurrency was that the interaction between the independent locking mechanisms of different plug-ins often resulted in deadlock situations. Because of the independent nature of the locks, there was no way for Eclipse to recover from the deadlock, which forced users to kill the application... ...The functionality provided by the workspace locking mechanism can be broken down into the following three aspects: Resource locking to ensure multiple operations did not concurrently modify the same resource Resource change batching to ensure UI stability during an operation Identification of an appropriate time to perform incremental building With the introduction of the Jobs API, these areas have been divided into separate mechanisms and a few additional facilities have been added. The following list summarizes the facilities added. Job class: support for performing operations or other work in the background. ISchedulingRule interface: support for determining which jobs can run concurrently. WorkspaceJob and two IWorkspace#run() methods: support for batching of delta change notifications. Background auto-build: running of incremental build at a time when no other running operations are affecting resources. ILock interface: support for deadlock detection and recovery. Job properties for configuring user feedback for jobs run in the background. The rest of this article provides examples of how to use the above-mentioned facilities... In regards to above API, is this an implementation of a particular design pattern? Which one?

    Read the article

  • Generating Report for NUnit

    - by thangchung
    All source code for this post can be found at my github. Some time ago, I received a request: people asked me how they can generate reports of the results of testing using NUnit. In fact, I had never done this. In my little world of programming, I only cared about the test results, red-green-refactoring, and that was it. The question was quite unexpected. I knew that I could use NCover to generate reports, but NCover's reports are too simple: they do not give us more details on the number of test cases, test methods, ... And I began to look into creating an interesting report for NUnit. I was lucky to find an open source project here. Its authors call it NUnit2Report, but one disadvantage is that it only runs on .NET 1.0, indeed too old compared to the current version 4.0. I tried to download the preview, but I could not run it. I had to open its source code and found that it uses XSLT to convert the XML output of NUnit results to HTML. Nothing really special, because I also knew that an XML output file is created after NUnit runs; the author only uses this file to convert to HTML using XSLT. And I decided to convert it to .NET 4.0, because then I would not have to code from scratch. The conversion took me some time, but I was lucky that I finally have what I want. Thanks to Gilles for this OSS; I will send a mail to thank him for his efforts in putting this out as OSS. Now I will show people how to do it. I used NAnt for the automated build and NUnit for running the test cases, and I used the Selenium testing framework. After writing three test cases using Selenium, I ran NUnit and got the following results: 1 failure and 2 successes. In the bin directory of this project there will be the NUnit output file as shown below. Then I created a build file, and a bat file for easy running (PowerShell can be used here also). Double-click the bat file to create a report like this. Finally, open the index.html file in the folder to view the report. As everyone can see, the test cases are divided very clearly, so I have met the requirements. This is really good. Once again I really thank Gilles for NUnit2Report. People can contact him via the mail address [email protected] or the website http://nunit2report.sourceforge.net. It really is useful to those who have promised reports to QA. Hopefully this post will help anyone really interested in doing reports for NUnit.
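    The heart of the approach is simply applying an XSLT stylesheet to NUnit's XML result file. As a minimal illustration (the file names below are assumptions; NUnit2Report's own stylesheets and NAnt task wrap quite a bit more around this), the transform can be driven from a few lines of C#:

        using System.Xml.Xsl;

        class NUnitReportSketch
        {
            static void Main()
            {
                var xslt = new XslCompiledTransform();
                xslt.Load("NUnit-report.xslt");                  // hypothetical stylesheet file
                xslt.Transform("TestResult.xml", "index.html");  // NUnit XML output -> HTML report
            }
        }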

    Read the article

  • Preview Links and Images in Google Chrome

    - by Asian Angel
    Anyone who has used the CoolPreviews extension in Firefox knows how wonderful that preview window can be. Now you can get the same kind of functionality in Chrome with the ezLinkPreview extension. Note: Extension will not work on websites containing “frame buster” code (navigation to the actual URL will occur). Before Normally if you want to have a better look at a particular webpage the only option you have is to go ahead and open it in a new tab or window. But it would certainly be nice to be able to take a quick “sneak peek” before-hand… After As soon as you have finished installing the extension everything is ready to go…just refresh any pages open prior to installation and enjoy the preview goodness. When you hover your mouse near any link you will notice a small “Preview Button” appear with the letters “EZ” inside. A closer look at the “Preview Button”. Click on the “Preview Button” to open the popup window. Now you can get a very good idea of whether the page is worth visiting or not. Here is a closer look at the popup window. Notice that you can see the URL for the webpage and access a convenient set of buttons on the right side (Open to new tab, Pin to keep overlay open, and Close). You can even resize the window as desired to best suit your needs (you can actually grab any of the four corners to resize the popup window). It is also possible to open a “preview window” inside the popup window…you can see the “Preview Button” here… If you have Chrome maximized you can enjoy using a large sized “preview window”. Now that is nice! For those who may be curious you can see that ezLinkPreview works nicely with images too. Conclusion The ezLinkPreview extension provides a quick and simple way to preview links and/or images while you are browsing. If you are looking for similar functionality in Firefox then be sure to read our article on CoolPreviews here. Links Download the ezLinkPreview extension (Google Chrome Extensions)

    Read the article

  • Is there an IDE that can simplify the process of creating a game matchmaking website?

    - by Scott
    Yes, I'm an old guy. And I'm well versed in "C" and have written several games which I have been selling on the web for a number of years. And now, I would like to adapt one of my games to be "online". Sounds simple. I'm sure I can use the thousands of lines of "C" code that I've already written. Right? So my initial investigation begins. First, I think I'll need a server program that lives on a dedicated server (or a VPS probably) that talks to a bunch of client applications that live on individual devices around the world. I can certainly handle that! (I think to myself). I'll break up my existing game into two pieces, a client piece that is just the game displays and buttons, and a server piece that does everything else. Piece of cake, right? But that means that the "server piece" must be executed on a remote machine somewhere and run 24/7. Can I do that? [apparently, that question is so basic, so uneducated, and so lame, that nobody has ever posed it before. Because hours of Googling do not yield an answer. Fine. I'll assume I can do that and move on.] I'll need a "game room", which to me means a website where you log in and then go to a lobby of some kind where you can set up your preferences, see if any of your friends are connected, and create or join games. Should be easy, but it's not. No way. Can I do all this with my local website builder? (which happens to be 90 Second Website Builder, a nice product, btw). It turns out I cannot. I can start with that, but I must modify each page so that it can interact with my SQL database. So I begin making each page a "PHP" page and dynamically modifying the HTML code with PHP code. I'm already starting to get a headache. Because the resulting web pages looked terrible, I began looking at using JQuery. I want to use a JQuery dialog on my website to display a list of friends and allow the user to select one to invite to the game. [google search for "how to populate a JQuery dialog from a sql database" yields nothing but more confusion.] Javascript? Java? HTML? XML? HTML5? PHP? JQuery? Flash? Sockets? Forms? CSS? Learning about each one of these, and how they interact with each other and/or depend on each other, is too much for my feeble old brain. Can anyone simplify this process for me? Is there an IDE that will help me do all this without having to go back to college for a few years? Thanks, Scott

    Read the article

  • Stop YouTube Videos from Automatically Playing in Chrome

    - by The Geek
    If you’ve actually used the internet before, you’ve probably come across a page with an auto-playing YouTube clip, and chances are good it was a rather annoying one. Here’s how to stop them from starting automatically in Chrome. We’ve already told you how to stop them from automatically playing if you’re a Firefox user (best answer: use Flashblock!), but now it’s time for Chrome users to get their turn.
    Use the Stop Autoplay for YouTube Extension: The great thing about this extension is that it stops the video from playing, but it allows it to continue buffering, so when you do feel like playing the video, it’ll already be downloaded—really useful for people with slower internet connections. There’s no UI or anything fancy, just head to the extension page and click the Install button. If you want to get rid of it later, use the Tools –> Extensions menu (or you can type chrome://extensions/ into your address bar), and then click the Uninstall link for that add-on. Download Stop Autoplay for YouTube [Google Chrome Extensions]
    Using FlashBlock for Chrome: If you really wanted to, you could just disable Flash across the board using FlashBlock for Chrome. Once you’ve installed the extension, you won’t see any Flash elements anywhere, and you’ll have to move your mouse over them and click to enable them each time. When I installed the extension the first time, I noticed that YouTube was already in the allow list. I’m not sure if that’s the default setting or not, but you can use the icon in the address bar, or the Options from the Extensions panel to get to the settings page, and from there you can remove anything from the White List that you wouldn’t want. Another nice feature about FlashBlock is that it can also block Silverlight, or you could simply uninstall or remove unnecessary Chrome plug-ins. Download FlashBlock for Chrome

    Read the article

  • Do you care about your Oracle System Support experience?

    - by user12244613
    It has been a while since I blogged about Systems Support within Oracle. I want to take this opportunity to raise awareness of how Oracle communicates with its systems customers. Previously, every item was communicated in a separate email message; however, not all of those messages appear to be getting the attention they require. In an effort to ensure Oracle is reaching all of our Sun and Oracle System customers, we have created the Oracle Systems Support Newsletter. This monthly newsletter will summarize support-relevant information for you and will cover topics that impact your support experience. For example:
    1. Did you know that sending explorer content to email addresses ending in @sun.com is going away soon? For more information, review Document 1362484.1.
    2. Are you an Auto Service Request (ASR) user? If yes, here are the latest changes:
    · ASR Manager accepts your My Oracle Support user name (email address) and password [Doc ID 1345484.1]
    · The ASR IP address for secure file transfer has changed [Doc ID 1338575.1]
    · ASR No Heartbeat status: find out how to resolve it [Doc ID 1346328.1]
    3. Did you notice we have changed the Service Request options for hardware and introduced a new problem category called “Automated Diagnosis”? This service streamlines the data you send in and then automatically provides an update of known issues found in your My Oracle Support Service Request. It also fast-tracks hardware failures by sending parts as soon as the data is analyzed. Have you used this new feature? If yes, tell us about it: take the 5-minute survey.
    4. Are you being proactive, or are you still ‘fire fighting’ in reactive mode? If you are being proactive for your Oracle system products, you might have used Oracle Sun System Analysis. Did you find it helpful? Can we improve it? Tell us: take the 5-minute survey.
    5. Are you aware that attaching files to your Service Request enables the support engineer to start work straight away? For a summary of products and files, review the Newsletter.
    6. Are you struggling to find patches, firmware, or product downloads? If yes, these issues are all addressed in the Newsletter.
    If this is the type of information you want to know about each month, take the time to read the Newsletter link and bookmark it in My Oracle Support so you can stay informed. Thanks for your time.

    Read the article

< Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >