Search Results

Search found 1675 results on 67 pages for 'richard j ross iii'.

Page 62/67

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP.  Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read.  This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario.

    Requirements
    - Allow Remote Desktop Connections in the guest OS.
    - The network adapter type must allow communication with the host machine (e.g. use an “Internal” virtual adapter.)
    - If running Server 2008 R2 on the guest, network discovery mode must be turned on.
    - If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running: DNS Client, Function Discovery Resource Publication, SSDP Discovery, UPnP Device Host.

    My Environment
    A quick word about my environment.  I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2.  I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs.  I’ve found this setup to work well except when it comes to the display window for my VMs.

    The Issue
    Ever since I began running Hyper-V I haven’t been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow.  During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600.  That is, until today, when I decided to fully investigate why I couldn’t connect via RDP.
    First a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight.  As it turns out I had not 1, not 2, but 3 items preventing me from using RDP.  Let’s dig into the requirements above.

    Allow RDP Connection
    This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections.  Change the setting from “Don’t allow…” to whichever “Allow connections…” setting suits your needs.  I chose the less secure option as this is just my dev laptop.

    Network Adapter Type
    When I originally configured my VMs I configured each to use 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs.  The connection for the ethernet adapter is an “External” adapter and thus doesn’t connect between the host and guest.  The virtual private adapter allowed communication ONLY between the VMs and not to my host.  There is a third option, “Internal”, which allows communication between VMs as well as to the host.  After finding out this distinction I promptly created an Internal network adapter and assigned that to my VMs.

    Turn On Network Discovery
    Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here.)  One of the settings that controls if a computer can be found on the network is aptly named Network Discovery.  By default Windows Server 2008 R2 turns Network Discovery off for security purposes.  To enable it, open up the Network and Sharing Center.  Click “Change Advanced Sharing Settings” on the left.
On the following screen select “Turn on network discovery” for the currently used profile and click Save Settings.  You may notice though that your selection to turn on network discovery doesn’t save.  If this is the case then you most likely don’t have the supporting services running (as was my case.) Network Discovery Supporting Services     There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here.)  The below images highlight these services.  In my guest VM I found that I had DNS Client already running while the other 3 were disabled.  I set them all to enabled and started the ones that were stopped.  After this change I returned to the Sharing settings screen and found that Network Discovery was turned on.  I’m not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on.  Either way the end result was a success. - DNS Client - Function Discovery Resource Publication - SSDP Discovery - UPnP Device Host Before and After Results     The first image is the smaller square shaped viewing window used by the Hyper-V native connection.  The second is the full-screen RDP connection in all its widescreen glory. Conclusion     Over the past few months I’ve found Hyper-V to be very useful for virtualizing my development environments, but I’ve also had a steep learning curve to get various items configured just right.  Allowing RDP connections to guest VMs was one area that I hadn’t been able to get right for the longest time.  Now that I resolved these issues I hope that others can avoid the pitfalls that I ran into.  If you know of any other items I left off feel free to let me know.        -Frog Out   Links Turning on Network Discovery http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx Services required for Network Discovery http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84
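
    For reference, the service changes above can also be scripted from inside the guest. The sketch below is only an illustration: it assumes Python 3 is available in the guest, and it uses what are assumed to be the standard short service names for the four dependencies (Dnscache, FDResPub, SSDPSRV, upnphost) — verify them with "sc query" first, or simply run the same two sc commands from an elevated prompt.

        import subprocess

        # Assumed short names for the four network discovery dependencies;
        # confirm with "sc query" on your own build before relying on them.
        SERVICES = ["Dnscache", "FDResPub", "SSDPSRV", "upnphost"]

        for name in SERVICES:
            # Set the service to start automatically, then start it now.
            subprocess.run(["sc", "config", name, "start=", "auto"], check=False)
            subprocess.run(["sc", "start", name], check=False)

    An elevated prompt is required either way; sc config and sc start will otherwise fail with an access-denied error.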

    Read the article

  • Oracle OpenWorld Preview: Real World Perspectives from Oracle WebCenter Customers

    - by Christie Flanagan
    If you frequent the Oracle WebCenter blog you’ve probably read a lot about the customer experience revolution over the last few months.  An important aspect of the customer experience revolution is the increasing role that peers play in influencing how others perceive a product, brand or solution, simply by sharing their own, real-world experiences.  Think about it, who do you trust more -- marketers and sales people pitching polished messages or peers with similar roles and similar challenges to the ones you face in your business every day? With this spirit in mind, this polished marketer personally invites you to hear directly from Oracle WebCenter customers about their real-life experiences during our customer panel sessions at Oracle OpenWorld next week.  If you’re currently using WebCenter, thinking about it, or just want to find out more about best practices in social business, next-generation portals, enterprise content management or web experience management, be sure to attend these sessions:

    CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
    Wednesday, Oct 3, 11:45 AM - 12:45 PM - Moscone West - 3000
    Priscilla Hancock - Vice President/CIO, University of Louisville
    Kellie Christensen - Director of Information Technology, Banner Engineering
    What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. This session will take a thought-provoking look at becoming a social business from the inside.

    CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
    Wednesday, Oct 3, 5:00 PM - 6:00 PM - Moscone West - 3000
    Roberts Wayne - Director, IT, Canadian Partnership Against Cancer
    Mike Beattie - VP Application Development, Aramark Uniform Services
    John Chen - Utilities Services Manager 6, Los Angeles Department of Water & Power
    Jörg Modlmayr - Head of Product Management, Siemens AG
    Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today’s market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners?
    Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

    CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
    Thursday, Oct 4, 12:45 PM - 1:45 PM - Moscone West - 3001
    Stephen Madsen - Senior Management Consultant, Alberta Agriculture and Rural Development
    Himanshu Parikh - Sr. Director, Enterprise Architecture & Middleware, Ross Stores, Inc.
    Ten years ago, people were predicting that by this time in history, we’d be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you’ll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

    CON8897 - Using Web Experience Management to Drive Online Marketing Success
    Thursday, Oct 4, 2:15 PM - 3:15 PM - Moscone West - 3001
    Blane Nelson - Chief Architect, Ancestry.com
    Mike Remedios - CIO, Arbonne
    Caitlin Scanlon - Product Manager, Monster Worldwide
    Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from customers on how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

    Your Handy Guide to WebCenter at Oracle OpenWorld
    Want a quick and easy guide to all the keynotes, demos, hands-on labs and WebCenter sessions you definitely don't want to miss at Oracle OpenWorld? Download this handy guide, Focus on WebCenter.

    More helpful links:
    - Oracle OpenWorld
    - Oracle Customer Experience Summit @ OpenWorld
    - Oracle OpenWorld on Facebook
    - Oracle OpenWorld on Twitter
    - Oracle OpenWorld on LinkedIn
    - Oracle OpenWorld Blog

    Read the article

  • Introducing Microsoft SQL Server 2008 R2 - Business Intelligence Samples

    - by smisner
    On April 14, 2010, Microsoft Press (blog | twitter) released my latest book, co-authored with Ross Mistry (twitter), as a free ebook download - Introducing Microsoft SQL Server 2008 R2. As the title implies, this ebook is an introduction to the latest SQL Server release. Although you'll find a comprehensive review of the product's features in this book, you will not find the step-by-step details that are typical in my other books. For those readers who are interested in a more interactive learning experience, I have created two sample files for download:
    - IntroSQLServer2008R2Samples project
    - Sales Analysis workbook
    Here's a recap of the business intelligence chapters and the samples I used to generate the screen shots by chapter:

    Chapter 6: Scalable Data Warehousing covers a new edition of SQL Server, Parallel Data Warehouse. Understandably, Microsoft did not ship me the software and hardware to set up my own Parallel Data Warehouse environment for testing purposes, and consequently you won't see any screenshots in this chapter. I received a lot of information and a lot of help from the product team during the development of this chapter to ensure its technical accuracy.

    Chapter 7: Master Data Services is a new component in SQL Server. After you install Master Data Services (MDS), which is a separate installation from SQL Server although it's found on the same media, you can install sample models to explore (which is what I did to create screenshots for the book). To do this, you deploy the packages found at \Program Files\Microsoft SQL Server\Master Data Services\Samples\Packages. You will first need to use the Configuration Manager (in the Microsoft SQL Server 2008 R2\Master Data Services program group) to create a database and a Web application for MDS. Then when you launch the application, you'll see a Getting Started page which has a Deploy Sample Data link that you can use to deploy any of the sample packages.

    Chapter 8: Complex Event Processing is an introduction to another new component, StreamInsight. This topic was way too large to cover in-depth in a single chapter, so I focused on information such as architecture, development models, and an overview of the key sections of code you'll need to develop for your own applications. StreamInsight is an engine that operates on data in-flight and as such has no user interface that I could include in the book as screenshots. The November CTP version of SQL Server 2008 R2 included code samples as part of the installation, but these are not the official samples that will eventually be available in Codeplex. At the time of this writing, the samples are not yet published.

    Chapter 9: Reporting Services Enhancements provides an overview of all the changes to Reporting Services in SQL Server 2008 R2, and there are many! In previous posts, I shared more details than you'll find in the book about new functions (Lookup, MultiLookup, and LookupSet), properties for page numbering, and the new global variable RenderFormat. I will confess that I didn't use actual data in the book for my discussion on the Lookup functions, but I did create real reports for the blog posts and will upload those separately. For the other screenshots and examples in the book, I have created the IntroSQLServer2008R2Samples project for you to download. To preview these reports in Business Intelligence Development Studio, you must have the AdventureWorksDW2008R2 database installed, and you must download and install SQL Server 2008 R2.
    For the map report, you must execute the PopulationData.sql script that I included in the samples file to add a table to the AdventureWorksDW2008R2 database. The IntroSQLServer2008R2Samples project includes the following files:
    - 01_AggregateOfAggregates.rdl to illustrate the use of embedded aggregate functions
    - 02_RenderFormatAndPaging.rdl to illustrate the use of page break properties (Disabled, ResetPageNumber), the PageName property, and the RenderFormat global variable
    - 03_DataSynchronization.rdl to illustrate the use of the DomainScope property
    - 04_TextboxOrientation.rdl to illustrate the use of the WritingMode property
    - 05_DataBar.rdl
    - 06_Sparklines.rdl
    - 07_Indicators.rdl
    - 08_Map.rdl to illustrate a simple analytical map that uses color to show population counts by state
    - PopulationData.sql to provide the data necessary for the map report

    Chapter 10: Self-Service Analysis with PowerPivot introduces two new components to the Microsoft BI stack, PowerPivot for Excel and PowerPivot for SharePoint, which you can learn more about at the PowerPivot site. To produce the screenshots for this chapter, I created the Sales Analysis workbook which you can download (although you must have Excel 2010 and the PowerPivot for Excel add-in installed to explore it fully). It's a rather simple workbook because space in the book did not permit a complete exploration of all the wonderful things you can do with PowerPivot. I used a tutorial that was available with the CTP version as a basis for the report, so it might look familiar if you've already started learning about PowerPivot. In future posts, I'll continue exploring the new features in greater detail. If there are any special requests, please let me know!

    Read the article

  • Looking for home networking hardware and software advice

    - by phobos7
    Note: I originally wrote this up in a blog post. I've removed any affiliate links that I put in my original post to ensure I don't annoy anybody.
    I've recently moved home and I now need to go to the trouble of sorting out my home network yet again. We had Virgin broadband in Hertford but you can't get Virgin in the street we've moved to, so I've had to go with O2 Broadband. Normally I prefer to use my own hardware, and previously used the DLink DIR-655 router which was great, but in this situation I am using the O2 Wireless Box III since I only have an old Netgear DG834PN Wireless G modem router and I'd rather be using Wireless N. Anyway, the place we have moved into has only one phone point in the hallway, has the best TV point in one room and the best place to put the TV and other entertainment stuff in yet another room. So, networking the house up for Internet and TV is required. The diagram below shows the things that I'll have in my home network, but there are three points where I'm not quite sure what hardware to use.
    1) Wireless Access Point/Bridge, that acts only as a wireless to wire bridge and not an AP, that links up a Media Centre/PC and a couple of consoles to the network. I'm pretty much settled on using an Acer Aspire Revo R3600 as my media PC, probably with Ubuntu or Windows and XBMC installed.
    2) Wireless Access Point/Bridge, that acts only as a wireless to wire bridge and not an AP, that links up a device that can decode and stream TV from a TV aerial across the network.
    3) The device that is connected to 2). At the moment I'm considering a HDHomeRun by SiliconDust.
    At the moment I'm considering either the TP LINK TL-WA701ND 150Mbps Wireless Lite N Access Point (very cheap at Amazon) or the Netgear 5 GHz Wireless-N HD Access Point/Bridge. I'd love to get some insight into what you would do in my situation. What Wireless Access Point/Bridge should I put at points 1) and 2)? What device should I choose for point 3) that can decode and stream a TV signal? Is the Acer Aspire Revo R3600 a good choice?
    Note 2: I've also posted this question on AVForums.

    Read the article

  • Mounting a drive in Ubuntu 9.10 (Karmic Koala)

    - by morpheous
    I have just installed Ubuntu on a machine that previously had XP installed on it. The machine has 2 HDD (hard disk drives). I opted to install Ubuntu completely over XP. I am new to Linux, and I am still learning how to navigate the file structure. However, AFAICT, there is only one drive. I want to be able to store programs etc on the first drive, and store data (program output etc) on the second drive. It appears Ubuntu is not aware that I have 2 drives (on XP, these were drives C and D). How can I mount the second drive (ideally, I want to do this automatically on login, so that the drive is available to me whenever I login - without manual intervention from me)?
    In XP, I could refer to files on a specific drive by prefixing with the drive letter (e.g. c:\foobar.cpp and d:\foobar.dat). I suspect the notation on Ubuntu is different. How may I specify specific files on different drives?
    Last but not the least (a bit unrelated to previous questions). This relates to directory structure again. I am a developer (C++ for desktops and PHP for websites), and I want to install the following apps/libraries:
    i). Apache 2.2
    ii). PHP 5.2.11
    iii). MySQL (5.1)
    iv). SVN
    v). Netbeans
    vi). C++ development tools (gcc, gdb, emacs etc)
    vii). QT toolkit
    viii). Some miscellaneous scientific software (e.g. www.r-project.org, www.gnu.org/software/octave/)
    I would be grateful if someone can recommend a directory layout for these applications. Regarding development, I would also be grateful if someone could point out where to store my project and source files, i.e.:
    (i) *.cpp, *.hpp, *.mak files for cpp projects
    (ii) individual websites
    On my XP machine the layout for C++ dev was like this:
    c:\dev\devtools (common libs and headers etc)
    c:\dev\workarea (root folder for projects)
    c:\dev\workarea\c++ (c++ projects)
    c:\dev\workarea\websites (web projects)
    I would like to create a similar folder structure on the Linux machine, but it's not clear whether to place these folders under /, /usr, /home or somewhere else (there seems to be a baffling number of choices, so I want to get it "right" first time - i.e. having a directory structure that most developers use, so it is easier when communicating with other Ubuntu/Linux developers).

    Read the article

  • SSD Fresh Does Not Start

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running? I tried shutting down the Spybot Resident and disabling the firewall and virus scanner with no luck. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program the Event Viewer logs this same set of errors:
    Error 4/2/2012 11:35:44 PM Application Error 1000 (100)
    Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None
    Error 4/2/2012 11:35:39 PM SideBySide 59 None (this identical SideBySide entry is logged roughly thirty times in a row)
    For those who are interested, here is my system configuration:
    ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard
    AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX
    G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT
    Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (Primary/Boot HD)
    Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive (Secondary HD)
    Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B LightScribe Support
    RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply
    ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
    Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)

    Read the article

  • AWS EC2 instance not pingable or available in browser

    - by Slimmons
    I've seen this question asked in other places, but now I've run through every fix proposed in other questions, so I'm re-asking it here in hopes that someone will have a different solution.
    Problem: I have an EC2 instance, and I can ssh into it and work on it, and I have an Elastic IP assigned to it. I am unable to ping this machine or log in to it using my browser.
    Solutions mentioned and tried:
    1. service httpd start
    i. The response I get is "unrecognized service".
    ii. When I run apache2ctl -k start, it shows "httpd already running", so I'm assuming httpd is not the problem; it's just possibly named something else because of apache2, or for whatever reason.
    2. I went into EC2 - Security Group - Default (which is the one I used) - Inbound, and everything there is set up correctly (I'm assuming). There it shows 80 (HTTP) 0.0.0.0/0, 443 (HTTPS) 0.0.0.0/0, and various other services with their ports and 0.0.0.0/0 next to them. I also enabled a rule for enabling ICMP Request All on 0.0.0.0/0 temporarily for testing purposes.
    3. I've tried disabling the iptables with "service ufw stop".
    4. Just in case I'm doing something really stupid, because I'm not all that used to connecting to web servers that I've spun up, I'm typing the address of the machine into the URL bar like this (assuming my IP address was ip.address):
    i. http://ip.address/
    ii. ip.address
    iii. https://ip.address/
    iv. ip.address/webFolderName/
    v. http://ip.address/webFolderName/
    None of the attempts worked, and the only thing I haven't tried that I've seen is to start Wireshark on the machine and see if the requests are reaching it and it's just ignoring them. I'm not sure I want to do that yet, since A) I'm not 100% positive how to use Wireshark without the GUI, since it's the only way I've ever used it (I really should get used to it in the terminal, but I didn't even know you could), and B) it really seems like I'm missing something simple in getting this to work. Thanks in advance for any help.
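
    One quick check worth adding to the list above — it doesn't fix anything, but it distinguishes "the request never reaches the instance" from a browser or DNS issue. A minimal Python sketch, run from any machine outside AWS; the address is a placeholder for the Elastic IP:

        import socket

        def port_open(host, port, timeout=5):
            # Return True if a TCP connection to host:port can be established.
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        # Hypothetical Elastic IP; substitute the instance's real address.
        for port in (22, 80, 443):
            print("port %d open: %s" % (port, port_open("203.0.113.10", port)))

    If 22 reports open but 80 does not, the problem sits in front of Apache (security group, a local firewall, or nothing actually listening on port 80) rather than in how the URL is typed.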

    Read the article

  • Crashes and freezes after fixing "BOOTMGR is missing" error

    - by Greg-J
    I came back from a 3-day weekend to a computer that was off. I leave my PC on 24/7, so this was odd. Turn it on to get the dreaded "BOOTMGR is missing" screen. Two attempts at Windows Recovery and it booted into Windows fine. After an hour or so, I get a frozen Chrome and my start bar disappears. Ctrl+Alt+Del brings up an error box telling me that Ctrl+Alt+Del failed to work properly. Clicking on any open application triggers an error (I can't recall the error now, but it essentially just said that the application couldn't be found running or something along those lines). I restart, and again, the same thing happens after a while of use. I turn it on, install the 47 updates I have or so, and then restart it. After a while of use (under an hour), it just freezes completely. My thoughts are: SSDs, RAM or PSU. My system specs below:
    (RAID0) 2 x Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD)
    CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CML16GX3M4A1600C9
    CORSAIR HX Series HX750 750W ATX12V 2.3 / EPS12V 2.91 SLI Ready CrossFire Ready 80 PLUS GOLD Certified Modular Active
    1 x ASUS Maximus IV Gene-Z/GEN3 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard
    1 x Hitachi GST Deskstar 7K1000.C 0F10383 1TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive
    1 x Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor Intel HD Graphics
    1 x SAPPHIRE 21197-00-40G Radeon HD 7970 3GB 384-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
    1 x Noctua NH-D14 120mm & 140mm SSO CPU Cooler
    This is all crammed in a pretty small case (NZXT Vulcan) and has been running perfectly problem-free since January. The only thing out of the ordinary is that there is a fan in the case that is now making noise whereas the case has previously been completely silent. I have no reason to believe this is anything more than correlation, but felt it is worth mentioning. I believe it MAY be the SSDs simply because of the BOOTMGR error, but I'm not sure how to test that theory. My belief that it may be the RAM is simply from experience with frozen machines. I haven't had the time to memtest it, but will. The PSU being the culprit is something I've picked up by reading similar threads on various forums, and it seems plausible. I am unsure how to test this, though. ANY insight whatsoever would be greatly appreciated!

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/ MySQL/ Postgres sites/ applications. We host two different types of site.

    i). PHP/ MySQL/ Zend Framework

        <VirtualHost *:80>
            DocumentRoot "/home/websites/website.com/public"
            ServerName website.com
            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development
            <Directory "/home/websites/website.com/public">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
                RewriteEngine On
                RewriteCond %{REQUEST_FILENAME} -s [OR]
                RewriteCond %{REQUEST_FILENAME} -l [OR]
                RewriteCond %{REQUEST_FILENAME} -d
                RewriteRule ^.*$ - [NC,L]
                RewriteRule ^.*$ index.php [NC,L]
            </Directory>
        </VirtualHost>

    ii). Matrix CMS - PHP/ Postgres + loads of PEAR classes

        <VirtualHost *:80>
            ServerName server.example.com
            DocumentRoot /home/websites/mysource_matrix/core/web
            Options -Indexes FollowSymLinks
            <Directory /home/websites/mysource_matrix>
                Order deny,allow
                Deny from all
            </Directory>
            <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
                Order allow,deny
                Allow from all
            </DirectoryMatch>
            <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
                php_flag engine off
            </DirectoryMatch>
            <FilesMatch "\.inc$">
                Order allow,deny
                Deny from all
            </FilesMatch>
            <LocationMatch "/(CVS|\.FFV)/">
                Order allow,deny
                Deny from all
            </LocationMatch>
            Alias /__fudge /home/websites/mysource_matrix/fudge
            Alias /__data /home/websites/mysource_matrix/data/public
            Alias /__lib /home/websites/mysource_matrix/core/lib
            Alias / /home/websites/mysource_matrix/core/web/index.php/
        </VirtualHost>

    My key requirements are:
    - I don't want to worry/ know/ care about the server/ infrastructure
    - Secure/ up-to-date software/ OS
    - Good monitoring
    - Automatic scalability
    - SLA
    I apologise for the length of the question. In short, all I want to do is i). create vhost, ii). create db, iii). install app/ site, iv). relax. Thanks.
    Edit: I include the Matrix vhost because that is the only complication that I cannot really do via a .htaccess file.

    Read the article

  • Explanation of the init.d/scripts Fedora

    - by Shahmir Javaid
    Below is a copy of the vsftpd script; I need some explanations of some of the commands used in this script:

        #!/bin/bash
        #
        ### BEGIN INIT INFO
        # Provides: vsftpd
        # Required-Start: $local_fs $network $named $remote_fs $syslog
        # Required-Stop: $local_fs $network $named $remote_fs $syslog
        # Short-Description: Very Secure Ftp Daemon
        # Description: vsftpd is a Very Secure FTP daemon. It was written completely from
        #              scratch
        ### END INIT INFO

        # vsftpd      This shell script takes care of starting and stopping
        #             standalone vsftpd.
        #
        # chkconfig: - 60 50
        # description: Vsftpd is a ftp daemon, which is the program \
        #              that answers incoming ftp service requests.
        # processname: vsftpd
        # config: /etc/vsftpd/vsftpd.conf

        # Source function library.
        . /etc/rc.d/init.d/functions

        # Source networking configuration.
        . /etc/sysconfig/network

        RETVAL=0
        prog="vsftpd"

        start() {
            # Start daemons.

            # Check that networking is up.
            [ ${NETWORKING} = "no" ] && exit 1
            [ -x /usr/sbin/vsftpd ] || exit 1

            if [ -d /etc/vsftpd ] ; then
                CONFS=`ls /etc/vsftpd/*.conf 2>/dev/null`
                [ -z "$CONFS" ] && exit 6
                for i in $CONFS; do
                    site=`basename $i .conf`
                    echo -n $"Starting $prog for $site: "
                    daemon /usr/sbin/vsftpd $i
                    RETVAL=$?
                    echo
                    if [ $RETVAL -eq 0 ]; then
                        touch /var/lock/subsys/$prog
                        break
                    else
                        if [ -f /var/lock/subsys/$prog ]; then
                            RETVAL=0
                            break
                        fi
                    fi
                done
            else
                RETVAL=1
            fi
            return $RETVAL
        }

        stop() {
            # Stop daemons.
            echo -n $"Shutting down $prog: "
            killproc $prog
            RETVAL=$?
            echo
            [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
            return $RETVAL
        }

        # See how we were called.
        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          restart|reload)
            stop
            start
            RETVAL=$?
            ;;
          condrestart|try-restart|force-reload)
            if [ -f /var/lock/subsys/$prog ]; then
                stop
                start
                RETVAL=$?
            fi
            ;;
          status)
            status $prog
            RETVAL=$?
            ;;
          *)
            echo $"Usage: $0 {start|stop|restart|try-restart|force-reload|status}"
            exit 1
        esac
        exit $RETVAL

    Question I
    What is the difference between the && and || signs in the commands below, and is it just an easy way to do a simple if check, or is it completely different to an if [ ..something.. ]; then ..something.. fi:

        # Check that networking is up.
        [ ${NETWORKING} = "no" ] && exit 1
        [ -x /usr/sbin/vsftpd ] || exit 1

    Question II
    I get what -eq and -gt are (equal to, greater than), but is there a simple website that explains what -x, -d and -f are?
    Any help would be appreciated. Running Fedora 12 as my OS. Script copied from /etc/init.d/vsftpd.

    Question III
    It says the required starts are $local_fs $network $named $remote_fs $syslog, but I can't see anywhere that it checks for those.

    Read the article

  • Real-time offline folder-to-folder backup application needed (Windows)

    - by niktech
    I recently started using Intel Matrix Storage RAID solution that allowed me to use my 5 1TB drives for two RAID volumes. First one a 1TB RAID 0 striped across all 5 drives and second one a RAID 5 across the rest of the free space on all drives (around 2.85TB usable space). The RAID 0 I use for OS, applications and games while the RAID 5 I use as a more-permanent type storage (photos, etc). Now I do realize that running the OS and applications on RAID 0 across 5 drives is very dangerous, which is what brings up the following question. Is there a reliable freeware realtime backup application that can backup a set of folders from one drive to another drive (no online backups needed)? I've already tried a few (Mozy, Yadis, Comodo Backup, GFI Backup, Idoo, Crash Plan) but none meet my requirements: Low CPU and RAM usage. Realtime Backups - as soon as a file is modified in the source folder, it is added to the backup queue which will be processed with the lowest priority when the CPU is idle. This backup queue should persist in cases of computer restarts (ie: the source and destination folders should always have the same set of files, except for the ones waiting in the backup queue). Incremental Backups - if only 10 bytes changed in a 1GB file, the app should only copy those 10 new bytes. Ability to back up locked and opened files (some apps, like Yadis, can't back up critical files like browser favorites). Ability to run as a service (no need for any user to log-in to have the app started). Optional requirements: Compression of the destination into a well-known format (RAR, Zip) that can be directly read without the use of the application. Preset source folders (such as Browser Favorites, Game Saves, Application Settings, etc). The idea is to use RAID 0 array as "semi-persistent RAM-like" storage which in case of a failure can be quickly rebuilt by reinstalling the OS, apps and games and copying over the settings, saves, favorites from the RAID 5. I'm also thinking of taking this RAID 0 as RAM idea to the extreme with SSDs (as soon as we get some nice 6Gb/s SATA III SSDs out there), where a couple of SSDs chained in RAID 0 will work as yet another semi-persistent cache layer sitting between the RAM and the HD. I'm just hoping there already exists an application that satisfies these requirements... otherwise I'll have to write one myself, which I would prefer not to do.
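
    For what it's worth, the "realtime backup" requirement above — watch the source folders, queue every change, process the queue at low priority — is essentially a file-system watcher feeding a work queue. The sketch below is only an illustration of that piece: it assumes the third-party Python watchdog package and uses a placeholder folder path; persistence of the queue across restarts and the actual copying are left out.

        import queue
        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        pending = queue.Queue()  # the "backup queue"; a real tool would persist this across restarts

        class QueueChanges(FileSystemEventHandler):
            def on_modified(self, event):
                if not event.is_directory:
                    pending.put(event.src_path)  # copy later, at low priority, when the CPU is idle

        observer = Observer()
        observer.schedule(QueueChanges(), r"C:\source-folder", recursive=True)
        observer.start()  # events now arrive on a background thread

    The "copy only the 10 changed bytes" part is a separate problem (block-level deltas of the kind rsync computes); a watcher like this only tells you which files need attention.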

    Read the article

  • bluetooth connection using pybluez

    - by srj0408
    I am working on Bluetooth - not exactly on Bluetooth stack development, but to use Bluetooth in one of my projects. I had done all that before using some of the BlueZ/PyBluez commands like hciconfig, hcitool scan, then simple-agents and using the serial module inside Python. But that was quite random. We were able to connect only one specific device based on its Bluetooth address, and there was no facility for reconnection once the devices are disconnected. Now I want to try out this stuff in a sequential manner like this (I am doing all that on an RPi and, at present, on Ubuntu 12.04):
    i) Store some names in a file along with some other information with respect to that device.
    ii) Run a script to find out the devices in the locality with those names and, if any one is found, report that. For this step, I had taken a reference from BTBook, made available from MIT. Below is the script for the same, but that script only searches for a single name.

        from bluetooth import *

        target_name = "XT1033"
        target_address = None

        nearby_devices = discover_devices()

        for address in nearby_devices:
            if target_name == lookup_name( address ):
                target_address = address
                break

        if target_address is not None:
            print "found target bluetooth device with address ", target_address
            connect_socket(target_address);
        else:
            print "could not find target bluetooth device nearby"

    iii) Connect the device using a client socket. But I don't have any device on which I can write a simple Python script. My client can be any device that will be publishing data. Now I came across a script in the same book that actually connects to a client requesting permission to connect to the server.

        from bluetooth import *

        port = 1
        server_sock=BluetoothSocket( RFCOMM )
        server_sock.bind(("",port))
        server_sock.listen(1)

        client_sock, client_info = server_sock.accept()
        print "Accepted connection from ", client_info

        data = client_sock.recv(1024)
        print "received [%s]" % data

        client_sock.close()
        server_sock.close()

    Here client_sock, client_info = server_sock.accept() provides the client address and port requested to be connected. Can I pass the address obtained from the earlier script to this, so that it connects the server to the client?
    iv) Then, if the client gets disconnected, re-connect (a simple polling can be used).
    All this stuff can be done using bash and PyBluez functions, but I want to do that in a sequential manner. I am not a master in Python, but I can do some small stuff. Can anyone guide me for the same or direct me to a more useful resource through which I can continue my coding part after finding the "X", "Y" named devices.
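
    Putting steps iii) and iv) together, here is a rough sketch of a client that connects to the address found by the discovery script and keeps reconnecting when the link drops — an illustration only, assuming the publishing device accepts RFCOMM connections on port 1:

        from bluetooth import BluetoothSocket, BluetoothError, RFCOMM
        import time

        def connect_and_listen(target_address, port=1, retry_delay=5):
            # Keep trying to connect to the address found by the discovery
            # script; on any Bluetooth error, wait and reconnect (step iv).
            while True:
                sock = BluetoothSocket(RFCOMM)
                try:
                    sock.connect((target_address, port))
                    print("connected to %s" % target_address)
                    while True:
                        data = sock.recv(1024)
                        if not data:
                            break  # remote side closed the connection
                        print("received [%s]" % data)
                except BluetoothError as err:
                    print("connection lost: %s" % err)
                finally:
                    sock.close()
                time.sleep(retry_delay)  # simple polling before the next reconnect attempt

    Whether the RPi should act as the client (as here) or as the server depends on which side initiates the publishing, but the same reconnect loop applies either way.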

    Read the article

  • Best ways to teach a beginner to program?

    - by Justin Standard
    Original Question
    I am currently engaged in teaching my brother to program. He is a total beginner, but very smart. (And he actually wants to learn.) I've noticed that some of our sessions have gotten bogged down in minor details, and I don't feel I've been very organized. (But the answers to this post have helped a lot.) What can I do better to teach him effectively? Is there a logical order that I can use to run through concept by concept? Are there complexities I should avoid till later? The language we are working with is Python, but advice in any language is welcome.

    How to Help
    If you have good ones please add the following in your answer:
    - Beginner Exercises and Project Ideas
    - Resources for teaching beginners
    - Screencasts / blog posts / free e-books
    - Print books that are good for beginners
    Please describe the resource with a link to it so I can take a look. I want everyone to know that I have definitely been using some of these ideas. Your submissions will be aggregated in this post.

    Online Resources for teaching beginners:
    - A Gentle Introduction to Programming Using Python
    - How to Think Like a Computer Scientist
    - Alice: a 3d program for beginners
    - Scratch (A system to develop programming skills)
    - How To Design Programs
    - Structure and Interpretation of Computer Programs
    - Learn To Program
    - Robert Read's How To Be a Programmer
    - Microsoft XNA
    - Spawning the Next Generation of Hackers
    - COMP1917 Higher Computing lectures by Richard Buckland (requires iTunes)
    - Dive into Python
    - Python Wikibook
    - Project Euler - sample problems (mostly mathematical)
    - pygame - an easy python library for creating games
    - Create Your Own Games With Python ebook
    - Foundations of Programming for a next step beyond basics.
    - Squeak by Example

    Recommended Print Books for teaching beginners
    - Accelerated C++
    - Python Programming for the Absolute Beginner
    - Code by Charles Petzold

    Read the article

  • WCF Mono - BasicHttpBinding with SSL

    - by TheNextman
    I'm trying to port an existing WCF client application to run on Linux under Mono. Right now I'm testing everything out, figuring out what works on Mono and what doesn't. The client makes a super simple call over basicHttpBinding. It works great, until I enable SSL (that is, specify BasicHttpSecurityMode.Transport in the binding). Running on .NET in Windows, it works great Running on Mono on Ubuntu 9.10 / Mono 2.6 I get the following error: Exception in async operation: System.Net.WebException: Error getting response stream (Write: The authentication or decryption has failed.): SendFailure --- System.IO.IOException: The authentication or decryption has failed. --- Mono.Security.Protocol.Tls.TlsException: Invalid certificate received from server. Error code: 0xffffffff800b010a I've read the Mono security FAQ here: http://www.mono-project.com/FAQ:_Security; however the SSL certificate on the server is from a root CA (a purchased certificate) - issued by Equifax Secure Certificate Authority. I ran the TlsTest tool on the Ubuntu install against the .svc URL and there are no problems/errors. Also I can hit the service fine in Firefox (no security warnings). What am I missing? Thanks in advance, Richard

    Read the article

  • Linq - how does it work??

    - by clarkeyboy
    Hey, I have just been looking into Linq with ASP.Net. It is very neat indeed. I was just wondering - how do all the classes get populated? I mean in ASP.Net, suppose you have a Linq file called Catalogue, and you then use a For loop to loop through Catalogue.Products and print each Product name. How do the details get stored? Does it just go through the Products table on page load and create another instance of class Product for each row, effectively copying an entire table into an array of class Product? If so, I think I have created a system very much like this, in the sense that there is a SiteContent module with an instance of each Manager class - for example there is UserManager, ProductManager, SettingManager and alike. UserManager contains an instance of the User class for each row in the Users table. They also contain methods such as Create, Update and Remove. These Managers and their "Items" are created on every page load. This just makes it nice and easy to access users, products, settings etc in every page as far as I, the developer, am concerned. Any any subsequent pages I need to create, I just need to reference SiteContent.UserManager to access a list of users, rather than executing a query from within that page (ie this method separates out data access from the workings of the page, in the same way as using code behind separates out the workings of the page from how the page is layed out). However the problem is that this technique seems rather slow. I mean it is effectively creating a database on every page load, taking data from another database. I have taken measures such as preventing, for example, the ProductManager from being created if it is not referenced on page load. Therefore it does not load data into storage when it is not needed. My question is basically whether my technique does the exact same thing as Linq, in the sense of duplicating data from tables into properties of classes.. Thanks in advance for any advice or answers about this. Regards, Richard Clarke

    Read the article

  • Internet Explorer 7 Bugs - incorrect display OR dead links

    - by ClarkeyBoy
    Hi, I recently launched a website I have been developing over the past year - http://Live.heritageartpapers.co.uk/. My dad, who owns the company, had a phone call today saying it doesnt display properly in IE7. Bug #1: The header and footer are both in a div, whereas the content is in a table between the two divs. Reportedly the content (table) sometimes (not always, according to IETester) displays below the footer, but the footer still displays where it is supposed to (ie there is a massive gap where the content should fit). Bug #2: When the content does display in the correct place, all the links on the page are dead - click on them and nothing happens. As you can see if you view it in Firefox (the version I am using is 3.6), the links in the left hand menu turn orange on mouseover. However they do not even do this in IE7. Note that they do turn orange and do work if the content is displayed below the footer. I cant see why its happening - according to IETester, the IE7 interpreted source code has all the tags capitalised and many quotes removed (for example for the id attribute on most, if not all, tags) but I doubt this could cause the above bugs, could it? My question is whether anyone has ever seen any of these problems before, and/or has a solution to any of these problems?? I currently do not have the application open, but will post any relevant code in a few minutes. Alternatively just use view source. Many thanks in advance. Regards, Richard Clarke

    Read the article

  • Postback event not firing on FIRST button click..

    - by ClarkeyBoy
    Hi, I have a form which accepts two arguments. The first one is mode - this is either view, new or edit. If it is new then the second argument is type - this is either range, collection or design. When set to new, and the type is valid, a new instance of that type is created and the data from the form is added to it. The item (range, collection or design) then validates the data. If any of the data is invalid then it throws an error, and this error is displayed at the top of the form telling the user why it is invalid. A variable, _Databind, is set to false so that it does not change the data input by the user (in the form fields). The button used to submit the button is called btnSave, and is created in the html source. The click event is wired up in the form Protected Sub Blah(sender, e) Handles btnSave.Click. Strangely, whenever I edit an item that already exists the form submits fine the first time - the click event is fired. However when in "new" mode I have to click the button twice to fire the event. It also blanks all the form fields out on first click. I have even put a Response.Write("Hello World") line at the start of the click event - this is not being output on first click when adding a new item either. It is on first load when the mode is set to edit however. Does anyone have any ideas as to what is causing it to behave this way? Thanks in advance for any help. Regards, Richard

    Read the article

  • ASP.Net application timeout

    - by ClarkeyBoy
    Hi, I have an application I have just deployed which, for complicated reasons, stores all the data from the database in a module the first time any data from the specific table is required (i.e. when a customer requests to view a product for the first time, all the product data is stored in the ProductManager class (of which an instance is stored in a shared property of the SiteContent class, making the ProductManager easily accessible from any page). Now forget that you are probably now glaring at me for using this approach.. I am sure it has its inefficiencies but I have only been studying .Net for a year or so now so I am still learning. One thing I have noticed is that I can go on the site once, then revisit it 5 minutes later and it will load all the data into the ProductManager class again. It seems this is a .Net application timeout thing - since the session timeout is set to 30 minutes and, when I am logged in on the administration frontend, it logs me out after 5 minutes (ish). Does anyone have any idea how to change this? Is there any way I can change this in the code without having to contact the hosting company? If not in the code is there any way to change this in the web.config? Thanks in advance. Regards, Richard

    Read the article

  • CSS - Class hierarchies???

    - by ClarkeyBoy
    Hi, I have a site with a customer front end, which has a catalogue, homepage, contact page, about us page and so on. There is also an administration front end. I would like to implement a kind of hierarchy where any elements within an element with class "admin" will inherit properties set in the admin stylesheet and anything else inherits from the customer stylesheet. The purpose of this is so that admin can login on the admin front end, where they have access to lots of advanced stuff, but they can also navigate to the customer front end where they can execute basic tasks (such as hiding catalogue items, running a debug script if a customer reports an issue and so on). I would like all the admin tools on the customer front end to have properties taken from the admin stylesheet instead of the customer one - this will change the background colour and stuff. Is there any easy way to set up like namespaces to make things simpler, for example: .admin { .list { .list-subtitle { } .list-item { } } a { } } .customer { .list { .list-subtitle { } .list-item { } } a { } } I know it can be like: .admin .list {} .admin .list .list-item {} .admin a I just dont want to have to keep putting .admin all the time. Does anyone have any suggestions on how I could do this? I suppose I could write a .net class which sets this up and writes a stylesheet according to whats put into it, but then I would not be able to read the styles so easily add there would be all sorts of like Classes.Add(blah) and so on. Thanks in advance for any replies... Regards, Richard

    Read the article

  • Reference for proper handling of PID file on Unix

    - by bignose
    Where can I find a well-respected reference that details the proper handling of PID files on Unix? On Unix operating systems, it is common practice to “lock” a program (often a daemon) by use of a special lock file: the PID file. This is a file in a predictable location, often ‘/var/run/foo.pid’. The program is supposed to check when it starts up whether the PID file exists and, if the file does exist, exit with an error. So it's a kind of advisory, collaborative locking mechanism. The file contains a single line of text, being the numeric process ID (hence the name “PID file”) of the process that currently holds the lock; this allows an easy way to automate sending a signal to the process that holds the lock. What I can't find is a good reference on expected or “best practice” behaviour for handling PID files. There are various nuances: how to actually lock the file (don't bother? use the kernel? what about platform incompatibilities?), handling stale locks (silently delete them? when to check?), when exactly to acquire and release the lock, and so forth. Where can I find a respected, most-authoritative reference (ideally on the level of W. Richard Stevens) for this small topic?
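
    For what it's worth, the pattern most references converge on (the single-instance daemon example in Stevens & Rago's APUE is one) is to hold an advisory lock on the PID file for the whole life of the process, so staleness is detected by the absence of a lock rather than by guessing from the stored PID. A minimal Python sketch of that idea, with a hypothetical path:

        import fcntl
        import os
        import sys

        PIDFILE = "/var/run/foo.pid"  # hypothetical location, for illustration only

        def acquire_pidfile(path=PIDFILE):
            # Open (or create) the PID file and take a non-blocking exclusive lock.
            fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
            try:
                fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except (IOError, OSError):
                sys.exit("already running: lock held on %s" % path)
            # We own the lock: record our PID for tools that want to signal us.
            os.ftruncate(fd, 0)
            os.write(fd, ("%d\n" % os.getpid()).encode("ascii"))
            return fd  # keep this descriptor open for the lifetime of the daemon

    The lock disappears automatically when the process dies, which sidesteps the stale-file question; the file contents are then purely advisory, exactly as described above.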

    Read the article

  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews that it has to load into objects before displaying anything. Its a bit complex to explain how the system works, but a few of my other questions explains the system in great detail. The main difference between what they say and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page - it simply adds the pageview to the database and creates an object in an unsynchronised list... to put it simply, when a customer views a page it no longer loads all the pageviews into objects; but the admin frontend still does. I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue then the right hand column will display statistics and available actions for the selected item. To do this the page which gets loaded (through $('action-container').load(bla bla bla);) into the right hand column has to loop through ALL the pageviews - this ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really REALLY fast. The difference in speed is only like a second on my dev site, but the live site has thousands of pageviews so the difference is quite big... So my question is: why is it that the admin frontend loads so slowly while using $(bla).load(bla); is so fast? I mean whatever method jQuery uses, can't browsers use this method too and load pages super-fast? Obviously not as someone would've done that by now - but I am interested to know just why the difference is so big... is it just my system or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of differences? Thanks in advance, Regards, Richard

    Read the article

  • OpenCV: Shift/Align face image relative to reference Image (Image Registration)

    - by Abhischek
    I am new to OpenCV2 and working on a project in emotion recognition and would like to align a facial image in relation to a reference facial image. I would like to get the image translation working before moving to rotation. The current idea is to run a search within a limited range on both x and y coordinates and use the sum of squared differences as the error metric to select the optimal x/y parameters to align the image. I'm using the OpenCV face_cascade function to detect the face images; all images are resized to a fixed (128x128).
    Question: Which parameters of the Mat image do I need to modify to shift the image in a positive/negative direction on both the x and y axes? I believe setImageROI is no longer supported by Mat datatypes? I have the ROIs for both faces available, however I am unsure how to use them.

        void alignImage(vector<Rect> faceROIstore, vector<Mat> faceIMGstore)
        {
            Mat refimg = faceIMGstore[1];    //reference image
            Mat dispimg = faceIMGstore[52];  // "displaced" version of reference image
            //Rect refROI = faceROIstore[1];   //Bounding box for face in reference image
            //Rect dispROI = faceROIstore[52]; //Bounding box for face in displaced image
            Mat aligned;
            matchTemplate(dispimg, refimg, aligned, CV_TM_SQDIFF_NORMED);
            imshow("Aligned image", aligned);
        }

    The idea for this approach is based on the Image Alignment Tutorial by Richard Szeliski. Working on Windows with OpenCV 2.4. Any suggestions are much appreciated.
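
    Not an answer to the ROI question as such, but whole-pixel translation can be done without touching the Mat internals at all, by applying a 2x3 affine matrix. Below is a sketch of that plus the SSD search described above, using the Python bindings of the same OpenCV 2.4-era API (cv2.warpAffine mirrors cv::warpAffine on the C++ side); the ±8-pixel window is an arbitrary choice for illustration.

        import cv2
        import numpy as np

        def shift_image(img, dx, dy):
            # Translate by (dx, dy) pixels; positive dx moves right, positive dy moves down.
            rows, cols = img.shape[:2]
            m = np.float32([[1, 0, dx],
                            [0, 1, dy]])
            return cv2.warpAffine(img, m, (cols, rows))

        def ssd(a, b):
            # Sum of squared differences, the error metric described above.
            diff = a.astype(np.float32) - b.astype(np.float32)
            return float((diff * diff).sum())

        def best_shift(ref, disp, max_shift=8):
            # Brute-force search over a small window of integer shifts.
            candidates = ((ssd(shift_image(disp, dx, dy), ref), dx, dy)
                          for dx in range(-max_shift, max_shift + 1)
                          for dy in range(-max_shift, max_shift + 1))
            _, dx, dy = min(candidates)
            return dx, dy

    One caveat: the borders exposed by the shift are filled with zeros by default, which biases the SSD slightly; comparing only a central crop of both images avoids that.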

    Read the article

  • is this valid json array using php

    - by Rich
    Hello,
    I need to convert some code done by someone else to work in my MVC model. It is using some functions like EOD that I don't understand. Does that still work in a class? Primarily, my question focuses on the JSON output.
    The old code does not use the PHP json_encode function, but outputs it directly like this:

        ?>
        {
            "username": "<?php echo $_SESSION['username'];?>",
            "items": [
                <?php echo $items;?>
            ]
        }
        <?php

    I would do it like this, but I need to be sure it's right for the items part:

        header('Content-type: application/json');
        $output = array("username" => isset( $_SESSION['username'] ) ? $_SESSION['username'] : "?",
                        "items" => $items );
        $this->content = json_encode($output);

    This is some background on how $items is made. An item is stored like this:

        $_SESSION['chatHistory'][$_POST['to']] .= <<<EOD
        {
            "s": "1",
            "f": "{$to}",
            "m": "{$messagesan}"
        },
        EOD;

    and it is put in the $items variable like this:

        $items = '';
        if ( !empty($_SESSION['openChatBoxes'] ) ) {
            foreach ( $_SESSION['openChatBoxes'] as $chatbox => $void ) {
                $items .= $this->chatBoxSession($chatbox);
            }
        }
        // The chatBoxSession() function takes an item from the $_SESSION['chatHistory'] array and returns it.

    I hope this was somewhat clear enough? The PHP manual warns that in some cases you don't get an array output, instead you get an object. So, with the EOD syntax, I am not really sure. It could save me some time if I know some things are doing what they are supposed to, and giving the right output.
    thanks,
    Richard

    Read the article

  • ASP.NET 2.0 RijndaelManaged encryption algorithm vs. FIPS

    - by R Rush
    I'm running into an issue with an ASP.NET 2.0 application. Our network folks just upped our security, and now I get the following error whenever I try to access the app: "This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms." I've done a little research, and it sounds like ASP.NET uses the RijndaelManaged AES encryption algorithm to encrypt the ViewState of pages... and RijndaelManaged is on the list of algorithms that aren't FIPS compliant. We're certainly not explicitly calling any encryption algorithm... much less anything on the non-compliant list. This ViewState business makes sense to me, I guess. The thing I can't muddle out, though, is what to do about it. I've found a KB article that suggests using a web.config setting to specify a different algorithm... but either that didn't stick, or that algorithm isn't up to snuff, either. So:
    1) Is the RijndaelManaged / ViewState thing actually the problem? Or am I barking up the wrong tree?
    2) How do I specify what algorithm to use instead of RijndaelManaged? I've got a list of algorithms that are and aren't compliant; I'm just not sure where to plug that information in.
    Thanks!
    Richard

    Read the article

  • Floating point precision in Visual C++

    - by Luigi Giaccari
    Hi,
    I am trying to use the robust predicates for computational geometry from Jonathan Richard Shewchuk. I am not a programmer, so I am not even sure of what I am saying; I may be making some basic mistake. The point is that the predicates should allow for precise arithmetic with adaptive floating-point precision. On my computer, an Asus Pro31/S (Core Duo Centrino processor), they do not work. The problem may lie in the fact that my computer may use some improvements in the floating-point precision that conflict with the ones used by Shewchuk. The author says:

        /*  On some machines, the exact arithmetic routines might be defeated by the  */
        /*  use of internal extended precision floating-point registers.  Sometimes   */
        /*  this problem can be fixed by defining certain values to be volatile,      */
        /*  thus forcing them to be stored to memory and rounded off.  This isn't     */
        /*  a great solution, though, as it slows the arithmetic down.                */

    Now what I would like to know is whether there is a way, maybe some compiler option, to turn off the internal extended precision floating-point registers. I really appreciate your help.

    Read the article
