Search Results

Search found 859 results on 35 pages for 'nibot lab'.


  • Ethernet port sleeping on PS3 running linux

    - by Doug
    My lab has a PS3 running Ubuntu Linux 9.04 Server Edition. After a period of a few hours with no use, the Ethernet connection (eth0) seems to go to sleep, causing the connection to be lost. Pinging or trying to SSH into the machine results in no response. The fix I've been using is to access the machine locally and restart it (trying to bring eth0 down then up doesn't seem to correct it). I've tried setting up an hourly cron job that runs on the PS3 and pings another machine just to create network activity, but this doesn't seem to solve the problem either. Update: The solution was to run the above cron job much more frequently: every 10 minutes works.
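
    For reference, the workaround described above boils down to a single crontab entry; this sketch assumes Vixie cron's */10 step syntax (available on Ubuntu 9.04) and uses 192.168.0.1 as a stand-in for any reachable machine on the LAN:

        # m h dom mon dow  command -- ping once every 10 minutes to keep eth0 awake
        */10 * * * * ping -c 1 192.168.0.1 > /dev/null 2>&1

    Added with crontab -e on the PS3 itself; the single ping generates just enough traffic to stop the interface from idling out.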

    Read the article

  • Why are some recovery tools still able to find deleted files after I purge the Recycle Bin, defrag the disk, and zero-fill free space?

    - by Ivan
    As far as I understand, when I delete a file (without using the Recycle Bin), its record is removed from the file system's table of contents (FAT/MFT/etc.), but the contents of the disk sectors the file occupied remain intact until those sectors are reused for something else. When I use a file recovery tool, it reads those sectors directly and tries to rebuild the original file. What I can't understand is why recovery tools are still able to find deleted files (even if with a reduced chance of rebuilding them) after I defragment the drive and overwrite all the free space with zeros. Can you explain this? I thought zero-overwritten deleted files could only be found with specialized forensic magnetic-scan hardware, and that complex wiping algorithms (overwriting free space multiple times with random and non-random patterns) only make sense for preventing such a physical scan from succeeding; in practice, though, it seems a plain zero-fill is not enough to wipe all traces of deleted files. How can this be?

    Read the article

  • Remote X-windows between new RHEL5 and old Solaris 8

    - by joshxdr
    I have a very small lab network with three boxes: a modern x86-based RHEL3 box, an x86-based RHEL5 box, and a 1998-vintage SPARC Ultra 5 with Solaris 8. I can use ssh -X to run a program on the RHEL5 box and view its windows on the RHEL3 box; I believe this uses xauth and magic cookies. I have followed the X Window System HOWTO to set up xauth on the Solaris box, but so far no dice. I would like to use the X server on the RHEL3 box with a client program on the Solaris box (program running on the Solaris host, windows appearing on the Linux host). Is there a trick to this, or have I made a mistake following the instructions for setting up xauth and the magic cookie?
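
    For what it's worth, the xauth man page's own idiom for moving a magic cookie between hosts looks like the sketch below; user, solaris8 and rhel3box are placeholders, and if ssh -X from the RHEL3 box into the Solaris box works at all, it handles the cookie and DISPLAY automatically:

        # on the RHEL3 box, whose X server will display the windows
        xauth extract - $DISPLAY | ssh user@solaris8 xauth merge -

        # then, in a shell on the Solaris box
        DISPLAY=rhel3box:0.0
        export DISPLAY
        /usr/openwin/bin/xclock &      # any X client will do as a test

    One more thing worth checking: many Linux distributions start the X server with -nolisten tcp by default, which blocks direct connections from the Solaris client no matter what the cookies say.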

    Read the article

  • Can a Mac Mini Server and XCode be used for multiple students?

    - by twerdster
    I'm not an administrator, but I've been given the task of finding out whether this is possible. The scenario is this: at our university we are offering a course in basic iPhone programming for between 4 and 8 groups of students. We have a few iPads, iPods and iPhones but only two Mac Minis. We want to enable the students to work in Xcode in the lab (and from home if possible) without buying 8 Mac Minis. Is this possible to do using a Mac Mini Server? If so, how would it work if 2 or more groups want to use Xcode and debug their programs on devices simultaneously?

    Read the article

  • Dedicated server for all network functions?

    - by Alan
    I want to set up a fictional network configuration for a school in my neighborhood. They have about 50 computers altogether: 2x20 in computer rooms for students and another 10 scattered around for various professors. They should all access the Internet through a dedicated Linux router machine. What they would like is to have domain names for the three computer groups: Lab1, Lab2 and Professors. The computers in Lab1 and Lab2 should have static IPs and should all be named by number, so there would be 1@Lab1, 2@Lab1, etc. The Professors network should use DHCP, with authentication. Is it an OK solution to have all these functions on a single server (the one that will be used as the router)? Do I have to set up a local DNS server for the domain naming? Do the host names for the lab computers have to be set on the clients, or can they be assigned automatically?
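
    The question doesn't name any particular software, but as an illustration, dnsmasq can provide the DNS and DHCP parts from the single router box; the subnets and MAC addresses below are invented, and authentication for the Professors network would still need something separate (for example 802.1X or a RADIUS-backed captive portal):

        # /etc/dnsmasq.conf (fragment)
        domain=lab1                          # default domain for the static lab hosts below
        expand-hosts                         # qualify plain host names with the domain
        dhcp-range=10.0.1.0,static           # lab subnet: only the fixed leases below
        dhcp-range=10.0.3.50,10.0.3.200,12h  # DHCP pool for the Professors subnet
        domain=professors,10.0.3.0/24        # hosts in that pool get the 'professors' domain
        # one static lease per lab PC
        dhcp-host=00:11:22:33:44:55,pc1,10.0.1.1   # becomes pc1.lab1
        dhcp-host=00:11:22:33:44:66,pc2,10.0.1.2   # becomes pc2.lab1

    Because the static leases are handed out by the server, the lab clients need no per-host configuration, and dnsmasq answers DNS for the names it assigns, so a separate DNS server isn't required.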

    Read the article

  • Cisco ASA: querying external DNS

    - by Alpacino
    My lab network sits behind an ASA firewall, laid out as: 10.10.10.20 -- ASA -- 192.168.1.10 -- external website. My client (10.10.10.20) wants to reach the external website. I created a NAT rule and access lists:

        nat (inside,outside) static 192.168.1.10
        access-list outside-acl extended permit tcp any host 10.10.10.20 eq www
        access-list outside-acl extended permit tcp any host 10.10.10.20 eq domain
        access-list inside-acl extended permit tcp 10.10.10.0 255.255.255.0 any eq www
        access-list inside-acl extended permit tcp 10.10.10.0 255.255.255.0 any eq domain
        access-group outside-acl in interface outside
        access-group inside-acl in interface inside

    When I try to reach the website by domain name it fails, but reaching it by IP address works. Please help me solve this problem. Thank you.
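
    One thing worth checking, offered as a guess rather than a confirmed diagnosis: the inside ACL above only allows TCP to port 53, while ordinary DNS lookups travel over UDP 53, which would explain name resolution failing while browsing by IP address works. The missing rule would look something like:

        access-list inside-acl extended permit udp 10.10.10.0 255.255.255.0 any eq domain

    If that isn't it, the ASA's capture command on the inside interface can confirm whether the port 53 traffic is being dropped.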

    Read the article

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or preferably facts, on whether it's OK to install a server's OS on the RAID array or not. I would predict that installing it on separate drives is best, but I'm interested in the performance. The server in question will have 8 cores (2.4GHz each), 24GB RAM, and ~16TB of usable space on server-class drives in RAID10. There is also a backup subsystem of roughly equivalent size. I will be running CPU/memory-intensive applications on this server in addition to it being file storage for my work (research lab). If I install the OS (haven't decided which one; probably Ubuntu, Fedora, or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID10? If it is better to have the OS on separate drives, should I go for 150GB VelociRaptors in RAID1 or smallish SSDs in RAID1? Money is unfortunately a factor, as I think I'm close to maxing out my budget as it is. Thanks!

    Read the article

  • App to watch installer and roll back host later?

    - by OverTheRainbow
    I'm looking for a Windows application that can watch everything an installer does to Windows and can roll the system back to the state it was in before the application was installed. InCtrl5 is useful for seeing what an installer did, but it doesn't provide a way to return the host to its previous state. I'd like to avoid having to restore a host with e.g. CloneZilla just for a small application. The goal is to make it fast to test an application in a test lab. Does anyone know of an application that can do this? Edit: I wasn't specific enough: I need a way to totally remove an application but keep all other changes I made after installing it. Edit: After checking a few of them, I settled on Cleanse Uninstaller, which was able to remove the whole of an application, although it doesn't watch the installation itself.

    Read the article

  • VMs, virtual networks, and my home network

    - by Jason Taylor
    I want to create a small lab of VMs to test out networking with. I have two PCs running VMware, and I need the VMs on these two PCs to be connected to their own LAN. I am planning on bridging the VMs into my home network. My home network and PCs are in the 192.168.0.0 range, but I want my VMs to be in 10.1.0.0. If I do it this way (bridging the VMs on both hosts into the network), will the VMs be able to communicate? Will my home router freak out at seeing two different subnets? Is there another, easier way to connect the virtual LANs on two VMware Workstation hosts together?
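
    For what it's worth, bridged VMs sit on the same Ethernet segment as the physical LAN, so two bridged VMs in 10.1.0.0 can reach each other at layer 2 without the home router being involved at all (the router simply ignores traffic for a subnet it doesn't own). A sketch of a static guest configuration, assuming Linux guests that use Debian-style /etc/network/interfaces; the addresses are made up:

        # /etc/network/interfaces (fragment) on the first VM
        auto eth0
        iface eth0 inet static
            address 10.1.0.11
            netmask 255.255.0.0
            # no gateway entry: the home router has no interface in 10.1.0.0/16

    The second VM would get 10.1.0.12, and so on. An alternative that avoids the home LAN entirely is a host-only or custom VMnet network, though those don't span two physical hosts without bridging.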

    Read the article

  • Modify PATH variable for X11 during log-in

    - by user1028435
    I am working on some lab computers (read: no administrative rights) on which I need to change the PATH variable as X11 starts when I log in. The reason I need to change PATH at that point, as opposed to later, is that the Print Screen command seems to "bind" during login (forgive my poor explanation of this). Currently I have a .bashrc workaround:

        #!/bin/bash
        export PATH=/home/username/bin:$PATH

    I can make it work by starting a new X session, but I was wondering whether the change can be made at login. cat /etc/redhat-release tells me my distribution is Red Hat Enterprise Linux Client release 5.8 (Tikanga).
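
    If the goal is to have PATH set for the whole X session rather than only in terminals, one file worth trying is ~/.xprofile; the assumption here (not something stated in the question) is that Red Hat's Xsession/xinitrc scripts source it at graphical login, which they generally do on RHEL-family systems:

        # ~/.xprofile  -- sourced at X login by the display manager's session scripts (an assumption)
        PATH=/home/username/bin:$PATH
        export PATH

    Unlike ~/.bashrc, this runs once before the window manager and its key bindings (such as Print Screen) come up, which is the timing the question is after.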

    Read the article

  • Can't connect to server from certain machines

    - by Joel Coel
    On a small college campus we have a VLAN set up for the computer labs. These machines get assigned IP addresses in the 192.168.7.xxx range. In the server room, all of the servers are on the default VLAN and assigned IP addresses in the 10.1.1.xxx range. For the most part this works, but the lab machines are unable to connect to one of the servers; they can't even ping it. They can talk to other servers on the same switch as this server just fine. At first I thought it might be a VLAN issue, but I changed the server port's VLAN to match other known-working ports with no effect. Any ideas?
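
    A hedged way to separate a routing problem from a switching problem (the commands assume the problem server runs Linux; the Windows equivalents would be ipconfig, route print and a packet capture):

        # on the server the lab machines cannot reach
        ip addr show                                      # is the netmask what the 10.1.1.x network expects?
        ip route show                                     # is there a default gateway that can reach 192.168.7.0/24?
        tcpdump -n -i eth0 icmp and net 192.168.7.0/24    # do the lab machines' pings arrive at all?

    If the echo requests show up in the capture but no replies go back, the server is missing a route or the correct default gateway for the lab VLAN; if nothing arrives at all, the problem is back on the switch/VLAN side.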

    Read the article

  • Valid path to deploy vm on host

    - by ELSheepO
    I've recently added a VM to the library in SCVMM to use with MS Test Pro 2010 and have run into a problem. I cannot import it using Test Pro; it just won't give me any option to import it in the lab manager. Also, if I try to deploy it back to the host it came from, it gives me an error saying the path is not valid. Does anyone have any insight into this? SCVMM also seems to freeze every time I try to create a new VM from the template of the VM I've stored in the library, the same one that's giving me the problem when I try to deploy it onto the host. Thanks.

    Read the article

  • 64-bit Hardware Virtualization on VirtualBox

    - by Cat
    I am trying to set up a SQL Server practice lab using VirtualBox and a trial copy of Windows Server 2008 R2 in ISO format. I received an error message stating that my processor does not support 64-bit hardware virtualization. Although I enabled Intel hardware virtualization in the BIOS, it still doesn't work. According to the Intel website the processor does support VT-x, but the Acceleration tab in VirtualBox is greyed out and no references to VT-x appear in the settings available to me. Any ideas on how I can get around this? I checked around Server Fault and couldn't find anything, but if I missed an applicable post a link is great too. Specs below; if additional information is needed, please comment and I will provide it. Thanks in advance. VirtualBox version - 4.1.18. Hardware - Lenovo B570 1068-A3U, i3-2310M.
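
    Two things worth trying, offered as general VirtualBox/BIOS behaviour rather than anything specific to this laptop: VT-x changes in the BIOS often only take effect after a full power-off (not just a warm reboot), and the per-VM setting can also be forced from the command line while the VM is powered off ("SQL-lab" is a made-up VM name):

        VBoxManage modifyvm "SQL-lab" --hwvirtex on

    VirtualBox only offers 64-bit guest OS types in the Version list once it can actually see VT-x; if the Acceleration tab is still greyed out after a cold boot, VirtualBox genuinely cannot see it, and a BIOS update for the laptop is the next thing to look at.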

    Read the article

  • How do I automatically download files to a Kindle 2 whenever its USB connection is plugged into a Windows 7 machine?

    - by Bob Cross
    The inspiration of this question is the Gadget Lab article How To Make Your Kindle Into an Automatic Instapaper. In that article, they describe an implementation of an Automator workflow that: Detects the connection of a portable disk (which the workflow assumes is the Kindle 2). Downloads the Instapaper mobi file containing the text version of the articles marked for "Read Later" to the computer. Downloads the mobi file from the computer to the Kindle 2. All of this is straightforward in terms of functionality but it is not immediately obvious how to do step 1 on a Windows box. Is the Task Scheduler the first step? If so, which event should be the trigger for the rest of the task?
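
    On the Task Scheduler side there is no obvious built-in "USB drive arrived" trigger, so one hedged alternative for step 1 is a small PowerShell watcher built on WMI's volume-change events (a generic sketch, not the Gadget Lab workflow; the folder test for spotting the Kindle is an assumption):

        # watch for volume arrival (EventType 2 = device arrival)
        Register-WmiEvent -Query "SELECT * FROM Win32_VolumeChangeEvent WHERE EventType = 2" `
            -SourceIdentifier KindleArrival -Action {
                $drive = $Event.SourceEventArgs.NewEvent.DriveName   # e.g. "F:"
                if (Test-Path (Join-Path $drive 'documents')) {
                    # fetch the Instapaper .mobi here and copy it into the Kindle's documents folder
                }
            }

    The script can be kept running via a scheduled task that starts it at logon, which covers the Task Scheduler half of the question without needing a device-specific event ID.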

    Read the article

  • Log off as local "administrator" user, get blank login screen

    - by Force Flow
    I have an imaged lab environment running Windows 7 and attached to a domain. The local Administrator account is enabled for certain maintenance and prep tasks. Every time I log off from the local Administrator account, it brings me back to the standard Ctrl+Alt+Del login screen. When I press that combination, all the user controls vanish except for the accessibility button down in the left-hand corner. The only way I seem to be able to escape from this is to tap the power button to initiate a shutdown. Windows is up to date, and logging off as any other user works normally. The "hide last user" local security policy option is enabled. Has anyone seen this before, and how can I stop it from happening?

    Read the article

  • Running Ubuntu off a USB drive?

    - by Solignis
    I was wondering whether a USB 2.0 thumb drive has enough bandwidth to act as the primary system drive in an Ubuntu Linux server; more specifically, a SAN server. I am running an iSCSI target, ZFS, nfs-kernel-server, BIND9 (slave), and OpenLDAP (slave). I was thinking of resorting to a thumb drive because my new motherboard only has 4 SATA ports and I have 5 disks: 4 for the ZFS pool and 1 for the system. Unless I get an expansion card, there is no way to get more SATA ports. This "server" leans more towards a home server; I use it in my lab with my VMware server. It provides storage, or at least it did until it died. Would it still be better to go with a SATA hard disk?

    Read the article

  • Undetected virus? I study at a college, and now all of the school computers have paint.exe -autocheck

    - by Jeffy
    "C:\WINDOWS\system32\Paint.exe" -autocheck is added back to the registry every time it's removed, and this is effectively campus-wide: all the lab PCs (more than a hundred) and personal laptops have this file. I really have no expert help to turn to, and Jotti says the file is clean. Here's the dropped file: [removed] It seems we all had a game-cheating tool on our PCs called "Garena Maphack"; every time it was run it would drop Paint.exe into the system directory. Paint.exe is disguised as the real Paint from Windows, having the same icon and such. Check out ThreatExpert's report at threatexpert.com/report.aspx?md5=176288f6f22a80c76329853f8535d45b The game cheat that started this huge mess can be obtained from [removed] What do I do? Do any experts care to take this file apart?

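    Since the value keeps reappearing, a reasonable first step is simply to look at where it lives and what else starts with Windows; these are standard auto-run locations, nothing specific to this particular malware:

        reg query HKCU\Software\Microsoft\Windows\CurrentVersion\Run
        reg query HKLM\Software\Microsoft\Windows\CurrentVersion\Run
        reg query HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce

    Sysinternals Autoruns shows the same locations (and many more) in one view, and Process Monitor can log which process keeps writing the value back, which identifies the component that has to be removed first.
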
    Read the article

  • Lightweight use of Enterprise Library TraceListeners

    - by gWiz
    Is it possible to use the Enterprise Library 4.1 TraceListeners without using the entire Enterprise Library Logging AB? I'd prefer to simply use .NET Diagnostics Tracing, but would like to setup a listener that sends emails on Error events. I figured I could use the Enterprise Library EmailTraceListener. However, my initial attempts to configure it have failed. Here's what I hoped would work: <system.diagnostics> <trace autoflush="false" /> <sources> <source name="SampleSource" switchValue="Verbose" > <listeners> <add name="textFileListener" /> <add name="emailListener" /> </listeners> </source> </sources> <sharedListeners> <add name="textFileListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="..\trace.log" traceOutputOptions="DateTime"> <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" /> </add> <add name="emailListener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging" toAddress="[email protected]" fromAddress="[email protected]" smtpServer="mail.example.com" > <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" /> </add> </sharedListeners> </system.diagnostics> However I get [ArgumentException: The parameter 'address' cannot be an empty string. Parameter name: address] System.Net.Mail.MailAddress..ctor(String address, String displayName, Encoding displayNameEncoding) +1098157 System.Net.Mail.MailAddress..ctor(String address) +8 Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.EmailMessage.CreateMailMessage() +256 Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.EmailMessage.Send() +39 Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener.Write(String message) +96 System.Diagnostics.TraceListener.WriteHeader(String source, TraceEventType eventType, Int32 id) +184 System.Diagnostics.TraceListener.TraceEvent(TraceEventCache eventCache, String source, TraceEventType eventType, Int32 id, String format, Object[] args) +63 System.Diagnostics.TraceSource.TraceEvent(TraceEventType eventType, Int32 id, String format, Object[] args) +198 System.Diagnostics.TraceSource.TraceInformation(String message) +14 Which leads me to believe the .NET Tracing code does not care about the "non-standard" config attributes I've supplied for emailListener. I also tried adding the appropriate LAB configSection declaration and: <loggingConfiguration> <listeners> <add toAddress="[email protected]" fromAddress="[email protected]" smtpServer="mail.example.com" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging" name="emailListener" /> </listeners> </loggingConfiguration> This also results in the same exception. I figure it's possible to programmatically configure the EmailTraceListener, but I prefer this to be config-driven. I also understand I can implement my own derivative of TraceListener. So, is it possible to use the Ent Lib TraceListeners, without using the whole Ent Lib LAB, and configure them from the config file? Update: After examining the code, I have discovered it is not possible. The Ent Lib TraceListeners do not actually utilize the config attributes they specify in overriding TraceListener.GetSupportedAttributes(), despite the recommendations in the .NET TraceListener documentation. Bug filed.

    Read the article

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all, let me say first that I'm writing this question after months of trying to find out the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it. About the application It runs on Windows XP Professional SP2. It's built with Microsoft Visual C++ 6.0 with Service Pack 6. It's MFC based. It uses several external dlls (e.g. Xerces, ZLib or ACE). It has high performance requirements. It does a lot of network and hard disk I/O, but it's also cpu intensive. It has an exception handling mechanism which generates a minidump when an unhandled exception occurs. Facts about the crash It only happens on multiprocessor/multicore machines and under heavy loads of work. It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash on our testing lab. It only happens on some production systems (but always in multicore machines) It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg, sorry we don't have symbols) ChildEBP RetAddr Args to Child WARNING: Stack unwind information not available. Following frames may be wrong. 030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9 030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0]) 030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo]) 030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96 030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo]) 030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0]) 030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b 030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo]) 7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo]) 7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff 7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be *** WARNING: Unable to verify checksum for xerces-c_2_7.dll *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll - 7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff 7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7 7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be 7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090 7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list. What I have tried so far Reviewing all the code related to the point where the crash ends happening. Trying to enable pageheap on our testing lab though nothing useful has been found by now. We have substituted the std::list for a C array and then it crashes in other part of the code (although it is related code, it's not in the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset. 
Let me copy the stack contained in the dump: ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes msvcrt.dll!_free() + 0xc3 bytes MyApplication.exe!006a4fda() [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe] MyApplication.exe!0069f305() ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes msvcrt.dll!_free() + 0xc3 bytes c5ffffff() Possible solutions (that I'm aware of) which cannot be applied "Migrate the application to a newer compiler": We are working on this but It's not a solution at the moment. "Enable pageheap (normal or full)": We can't enable pageheap on production machines as this affects performance heavily. I think that's all I remember now, if I have forgotten something I'll add it asap. If you can give me some hint or propose some possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
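
    Since full page heap can't be enabled on the production machines for performance reasons, one hedged middle ground (standard gflags usage from the Debugging Tools for Windows, not something taken from the report above) is standard page heap, which adds fill-pattern checks on free without the cost of full page heap's guard pages:

        REM standard (light) page heap only -- far cheaper than /full
        gflags /p /enable MyApplication.exe
        REM check which images currently have page heap enabled
        gflags /p
        REM turn it off again when done
        gflags /p /disable MyApplication.exe

    Standard page heap usually detects a corrupted block when that block is next freed, which tends to move the crash closer to the code doing the damage.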

    Read the article

  • Javascript parent and child window functions

    - by Mike Thornley
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML>
        <HEAD>
        <TITLE>Lab 9-3</TITLE>
        <SCRIPT LANGUAGE="JavaScript">
        <!--
        function myFunction(){
            myWin = open("","","width=200,height=200");
            with(myWin.document){
                open();
                write("<HTML><HEAD><TITLE>Child Window</TITLE>");
                write("<SCRIPT>function myTest(){");
                write("alert('This function is defined in the child window ");
                write("and is called from the parent window.'); this.focus();}");
                write("</SCRIPT></HEAD><BODY><H3>Child Window</H3><HR>");
                write("<FORM><INPUT TYPE='button' VALUE='parent window function' ");
                // Use opener property
                write("onClick='opener.winFunction();'>");
                write("<P><INPUT TYPE='button' VALUE='close window' ");
                write("onClick='window.close();'>");
                write("</FORM></BODY></HTML>");
                close();
            }
        }
        function winFunction(){
            alert("This function is defined in the parent window\n" +
                  "and is called from the child window.");
            myWin.focus();
        }
        //-->
        </SCRIPT>
        </HEAD>
        <BODY>
        <H3>CIW Web Languages</H3>
        <HR>
        <FORM NAME="myForm">
        <INPUT TYPE="button" VALUE="open new window" onClick="myFunction();">
        <!-- Invoke child window function -->
        <input type="button" value="Click to open child window" onclick="javascript:void(myWin.myTest());"/>
        </FORM>
        <P>
        </BODY>
        </HTML>

    To explain my initial query further: the code above should open the child window (myWin) with the second button, the 'Open child window' button, without the need to open the window with the first button or do anything else. It should simply call the myWin.myTest() function. As it stands, the second button only works if the child window has already been opened with the first button. That is not the intended behaviour; the 'Open child window' button should work without anything else needing to be done. For some reason the parent window isn't communicating with the myWin window and the myTest function. It's not homework; it's part of a certification course lab and is coded in the manner I have been shown to understand as correct. The DTD isn't included as the focus is the JavaScript; I code correctly with regard to that and other W3C requirements.

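    For what it's worth, one way the parent page can make the second button self-sufficient is to guard the call so the child is (re)created when it does not yet exist. This is a hedged sketch reusing the myWin/myFunction/myTest names above; callChild is a hypothetical helper, and very old browsers may need the freshly written document a moment to finish before myTest is callable:

        function callChild(){
            // if the child was never opened, or has been closed, create it first
            if (typeof myWin == "undefined" || myWin.closed){
                myFunction();
            }
            myWin.myTest();
        }

    The second button would then use onClick="callChild();" instead of calling myWin.myTest() directly.
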
    Read the article

  • MySQL Connect: What to Expect From the Wondrous Land of MySQL Cluster

    - by Mat Keep
    The MySQL Connect conference is only a couple of weeks away, with MySQL engineers, support teams, consultants and community aces busy putting the final touches to their talks. There will be many exciting new announcements and sharing of best practices at the conference, covering the range of MySQL technologies. MySQL Cluster will be a big part of this, so I wanted to share some key sessions for those of you who plan on attending, as well as some resources for those who are not lucky enough to be able to make the trip but who can't afford to miss the key news. Of course, this is no substitute for actually being there… and the good news is that registration is still open ;-)

    Roadmap: What's New in MySQL Cluster - Saturday 29th, 1300-1400, in Golden Gate room 5. Bernd Ocklin, director of MySQL Cluster development, and I will be taking a look at what follows the latest MySQL Cluster 7.2 release. I don't want to give too much away - let's just say it's not often you can add powerful new functionality to a product while at the same time making life radically simpler for its users. For those not making it to the conference, a live webinar repeating the talk is scheduled for Thursday 25th October at 09.00 Pacific time. Hold the date; registration will open soon and be published to our MySQL Webinars page.

    Best Practices: Getting Started with MySQL Cluster, Hands-On Lab - Saturday 29th, 1600-1700, in Plaza Room A. Santo Leto, one of our lead MySQL Cluster support engineers, regularly works with users new to MySQL Cluster, assisting them with installation, configuration, scaling, etc. In this lab, Santo will share best practices for getting started.

    Delivering Breakthrough Performance with MySQL Cluster - Saturday 29th, 1730-1830, in Golden Gate room 5. Frazer Clement, lead MySQL Cluster software engineer, will demonstrate how to translate the awesome Cluster benchmarks (remember 1 BILLION UPDATEs per minute?!) into real-world performance. You can also get some best practices from our new MySQL Cluster performance guide.

    MySQL Cluster BoF - Saturday 29th, 1900-2000, room Golden Gate 5. Come and get a demonstration of new tools for the installation and configuration of MySQL Cluster, and spend time with the engineering team discussing any questions or issues you may have.

    Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster - Sunday 30th, 1145-1245, in Golden Gate room 7. In this session, JD Duncan and Andrew Morgan will present how to get started with both Memcached and the new NoSQL APIs. JD and I recently ran a webinar demonstrating how to build simple Twitter-like services with Memcached and MySQL Cluster; the replay is available for download.

    Case Studies: MySQL Cluster @ El Chavo, Latin America's #1 Facebook Game - Sunday 30th, 1745-1845, in Golden Gate room 4. Playful Play deployed MySQL Cluster CGE to power their market-leading social game. This session will discuss the challenges they faced, why they selected MySQL Cluster, and their experiences to date. You can read more about Playful Play and MySQL Cluster here.

    A Journey into NoSQLand: MySQL's NoSQL Implementation - Sunday 30th, 1345-1445, in Golden Gate room 4. Lig Turmelle, web DBA at Kaplan Professional and esteemed Oracle ACE, will discuss her experiences working with the NoSQL interfaces for both MySQL Cluster and InnoDB.

    Evaluating MySQL HA Alternatives - Saturday 29th, 1430-1530, room Golden Gate 5. Henrik Ingo, former member of the MySQL sales engineering team, will provide an overview of various HA technologies for MySQL, starting with replication and progressing to InnoDB, Galera and MySQL Cluster.

    What about the other stuff? Of course MySQL Connect has much, much more than MySQL Cluster. There will be lots on replication (which I'll blog about soon), MySQL 5.6, InnoDB, cloud, etc. Take a look at the full Content Catalog to see more. If you are attending, I hope to see you at one of the Cluster sessions... and remember, registration is still open.

    Read the article

  • DirectCompute

    In my previous blog post I introduced the concept of GPGPU, ending with: on Windows, there is already a cross-GPU-vendor way of programming GPUs, and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". In this post I want to share some links to get you started with DirectCompute and HLSL.
    1. Watch the recording of the PDC 09 session: DirectX11 DirectCompute.
    2. If session recordings are your thing, there are two more on DirectCompute from NVIDIA's GTC09 conference: 1015 (pdf, mp4) and 1411 (mp4, plus the presenter's written version of the session).
    3. Over at gamedev there is an old Compute Shader tutorial. At the same site, there is a 3-part blog post on Compute Shader: Introduction, Resources and Addressing.
    4. From PDC, you can also download the DirectCompute Hands On Lab.
    5. When you are ready to get your hands even dirtier, download the latest Windows DirectX SDK (at the time of writing the latest is dated Feb 2010).
    6. Within the SDK you'll find a Compute Shader Overview and samples such as: Basic, Sort, OIT, NBodyGravity, HDR Tone Mapping.
    7. Talking of DX11/DirectCompute samples, there are also a couple of good ones at this URL.
    8. The documentation of the various APIs is available online. Here are just some good (but far from complete) taster entry points into it: numthreads, SV_DispatchThreadID, SV_GroupThreadID, SV_GroupID, SV_GroupIndex, D3D11CreateDevice, D3DX11CompileFromFile, CreateComputeShader, Dispatch, D3D11_BIND_FLAG, GSSetShader.
    Comments about this post are welcome at the original blog.
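
    To make the terminology in those links concrete, here is a minimal, illustrative HLSL compute shader (not taken from the SDK samples) that doubles each element of a buffer bound as a UAV:

        // minimal compute shader: one thread per element, 64 threads per group
        RWStructuredBuffer<float> data : register(u0);

        [numthreads(64, 1, 1)]
        void CSMain(uint3 dtid : SV_DispatchThreadID)
        {
            data[dtid.x] = data[dtid.x] * 2.0f;
        }

    On the CPU side such a shader would be compiled with D3DX11CompileFromFile, wrapped with CreateComputeShader, and launched with Dispatch, i.e. the entry points listed under item 8 above.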

    Read the article

  • OTN Developer Days (Review) - San Juan, PR - April 29, 2010

    - by dana.singleterry
    A quick update on the San Juan, PR event. First off, it was a great success, with a keynote audience of 200+. Mickey Ralat, Managing Director of Oracle Caribbean, kicked off the event with a quick introduction, followed by me delivering the keynote message, The Fusion Development Platform, which is the first session in the regular OTN DD events that we run in North America. Following this session was a partner, SDT, essentially marketing their services, which cover the Oracle stack, and then a very brief presentation on APEX. After this we broke out into the various tracks: Java, (APEX) DB SQL Developer, and .NET on Oracle. After the breakout we ran the following sessions in the Java track: Developing with JDBC, UCP, and Java in Database; Rich Internet Applications in Web 2.0; and Development Made Simple Without Coding: Developing Reusable Business Components. As expected with the various tracks, we ended up with 50-70 in each of the Java-track sessions; the audience was very impressed with the power of JDeveloper/ADF 11g, and we got a number of questions ranging from licensing cost to upgrading from / integrating with Forms. As for the Forms questions, I fielded a number of them, and for those I couldn't, I pointed people towards Grant's resources, which seemed to suffice. They were, for the most part, unaware of the recent 11.1.1.3 release, which occurred only a couple of days prior to the event. The indication was that they were going to download it and use it for the lab included on the DVD, which we did not have time for them to even start on. For those of you who attended the event, you can download the updated presentations as follows: Keynote - The Fusion Development Platform; Rich Internet Applications in Web 2.0; Development Made Simple Without Coding - Developing Reusable Business Components.

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.  Monday, October 1 There are great networking activities on Sunday September 30, but PLM specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions.   This first session, 10:45 – 11:45 a.m. is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Soltuions” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently. 
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight in the often elusive practice of creating winning products, and we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m – 12:15 p.m.in Telegraph Hill, it’s a customer and partner driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld!  I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • Now Available: Visual Studio 2010 Release Candidate Virtual Machines with Sample Data and Hands-on-Labs

    - by John Alexander
    From a message from Brian Keller: “Back in December we posted a set of virtual machines pre-configured with Visual Studio 2010 Beta 2, Visual Studio Team Foundation Server 2010 Beta 2, and 7 hands-on-labs. I am pleased to announce that today we have shipped an updated virtual machine using the Visual Studio 2010 Release Candidate bits, a brand new sample application, and 9 hands-on-labs. This VM is customer-ready and includes everything you need to learn and/or deliver demonstrations of many of my favorite application lifecycle management (ALM) capabilities in Visual Studio 2010. This VM is available in the virtualization platform of your choice (Hyper-V, Virtual PC 2007 SP1, and Windows [7] Virtual PC). Hyper-V is highly recommended because of the performance benefits and snapshotting capabilities. Tailspin Toys The sample application we are using in this virtual machine is a simple ASP.NET MVC 2 storefront called Tailspin Toys. Tailspin Toys sells model airplanes and relies on the application lifecycle management capabilities of Visual Studio 2010 to help them build, test, and maintain their storefront. Major kudos go to Dan Massey for building out this great application for us. Hands-on-Labs / Demo Scripts The 9 hands-on-labs / demo scripts which accompany this virtual machine cover several of the core capabilities of conducting application lifecycle management with Visual Studio 2010. Each document can be used by an individual in a hands-on-lab capacity, to learn how to perform a given set of tasks, or used by a presenter to deliver a demonstration or classroom-style training. Unlike the beta 2 release, 100% of these labs target Tailspin Toys to help ensure a consistent storytelling experience. Software quality: Authoring and Running Manual Tests using Microsoft Test Manager 2010 Introduction to Test Case Management with Microsoft Test Manager 2010 Introduction to Coded UI Tests with Visual Studio 2010 Ultimate Debugging with IntelliTrace using Visual Studio 2010 Ultimate Software architecture: Code Discovery using the architecture tools in Visual Studio 2010 Ultimate Understanding Class Coupling with Visual Studio 2010 Ultimate Using the Architecture Explore in Visual Studio 2010 Ultimate to Analyze Your Code Software Configuration Management: Planning your Projects with Team Foundation Server 2010 Branching and Merging Visualization with Team Foundation Server 2010 “ Check out Brian’s Post for more info including download instructions…

    Read the article