Search Results

Search found 21907 results on 877 pages for 'virtual box'.

Page 228/877 | < Previous Page | 224 225 226 227 228 229 230 231 232 233 234 235  | Next Page >

  • Graphic demonstrating emphasis of front end in web apps

    - by sohail
    I remember stumbling across an amusing graphic a year or so ago which demonstrated the tiers of web development. The back end was shown as a tiny box, while the front end was shown as a huge box crammed with front-end technologies such as AJAX and DHTML. This is all a vague recollection. Does anyone know where on the Intraweb this graphic might be? It was probably on a programming cartoon site, but I only read XKCD regularly, and I couldn't find it there. Although tagged as fun, my request does have a productive edge to it - it would be quite useful in driving home to my colleagues how UI top-heavy web application development has become.

    Read the article

  • So, BizTalk 2010 Beta is out … wait, no it’s not … wait

    - by Enrique Lima
    Over the last couple of days we have seen posts and “rumors” of the Beta availability. There was a link to the bits on the Download Center, but then it disappeared. Documentation is available now at: BizTalk Server 2010 Documentation – Beta, Microsoft BizTalk Server 2010 ESB Toolkit Documentation – Beta, BizTalk RFID Server 2010 and BizTalk RFID Mobile 2010 Documentation – Beta. But what about the bits?!? From the BizTalk Server Team blog: “We will be announcing the public Beta of BizTalk Server 2010 at the Application Infrastructure Virtual Launch tomorrow (Thursday, May 20th, 2010 at 8:30 AM PST) with planned RTM in Q3 of 2010. BizTalk Server 2010 aligns with the latest Microsoft platform releases, including SQL Server 2008 R2, Visual Studio 2010 and SharePoint 2010, and will integrate with Windows Server AppFabric and with .NET 4. At this virtual launch event we will disclose details on new features and capabilities in BizTalk Server 2010 through presentations, whitepapers, videos and recorded demos. Please join us tomorrow for an exciting launch! The BizTalk Team” Keep your eyes and ears at the ready.

    Read the article

  • How to control fan speed and temperatures on Asus A8Js laptop running Ubuntu Server?

    - by Azeworai
    Hi, I have tried installing asusfan and lm-sensors but I'm unable to control my fans to cool my laptop down sufficiently. Currently it overheats at about 100 degrees Celsius, and my sensors output somehow does not have any fan information in it: jackson@OLYMPIA:~$ sensors acpitz-virtual-0 Adapter: Virtual device temp1: +69.0°C (crit = +110.0°C) coretemp-isa-0000 Adapter: ISA adapter Core 0: +66.0°C (high = +100.0°C, crit = +100.0°C) coretemp-isa-0001 Adapter: ISA adapter Core 1: +66.0°C (high = +100.0°C, crit = +100.0°C) I have checked my BIOS and there aren't any fan settings there. I can consistently overheat the machine just by converting a video with HandBrake. I have ubuntu-desktop installed for a GUI. Is there a way for me to make the fans start spinning before the laptop reaches a critical temperature and shuts itself down?
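
    For background (not from the question): lm-sensors and tools like asusfan ultimately talk to the kernel's hwmon interface under /sys/class/hwmon, and a fan curve is essentially a loop that maps a temperature input to a PWM output. The sketch below illustrates that idea only; the sysfs paths are assumptions that differ per driver, writing pwm1 needs root, and it only works if a driver actually exposes a writable pwm file for this laptop.

    ```cpp
    // Sketch: poll a hwmon temperature and raise the PWM duty cycle before the
    // critical threshold is reached. Paths are assumed; check /sys/class/hwmon.
    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <thread>

    long read_millidegrees(const std::string& path) {
        std::ifstream f(path);
        long v = -1;
        f >> v;                                     // hwmon reports millidegrees Celsius
        return f ? v : -1;
    }

    int main() {
        const std::string temp_path = "/sys/class/hwmon/hwmon0/temp1_input";  // assumed path
        const std::string pwm_path  = "/sys/class/hwmon/hwmon1/pwm1";         // assumed path
        for (;;) {
            long milli = read_millidegrees(temp_path);
            if (milli >= 0) {
                int duty = milli > 80000 ? 255 : (milli > 65000 ? 192 : 96);  // 0-255 duty cycle
                std::ofstream pwm(pwm_path);
                pwm << duty;                        // needs root and a driver exposing pwm1
                std::cout << (milli / 1000) << " C -> pwm " << duty << "\n";
            }
            std::this_thread::sleep_for(std::chrono::seconds(5));
        }
    }
    ```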

    Read the article

  • Can't get vnc to connect

    - by Thom
    I have a server and my laptop. I want to be able to start a VNC server on the server and then connect from my laptop. Both are running Ubuntu 11.10 64-bit desktop. On my server, I installed x11vnc and set it up with a password, no view-only password. I ssh to the box and typed vncserver :42. Now on my laptop, I installed gtkvncviewer and ran it. It popped up a box. I entered picard:42 (the name of the server in my /etc/hosts file) and the password. I tried with and without the user. It always disconnects immediately. Can anyone point out what I'm doing wrong? Is it because I'm not currently running a GUI session on picard? If so, how can I start the X session remotely to connect with vncserver?

    Read the article

  • Microsoft Azure News: Capturing VM Images

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2014/05/21/microsoft-azure-news-capturing-vm-images.aspx If you have a Virtual Machine (VM) in Microsoft Azure that has a specific configuration, it used to be difficult to clone that VM. You had to sysprep the VM, and clone the data disks. This was slow, prone to errors, and stopped you from being productive. No more! A new option, called Capture, allows you to easily select a VM, running or not. The capture will copy the OS disk and data disks and create a new image out of them automatically for you. This means you can now easily clone an entire VM without affecting productivity.  To capture a VM, simply browse to your Virtual Machines in the Microsoft Azure management website, select the VM you want to clone, and click on the Capture button at the bottom. A window will come up asking you to name your image. It took less than 1 minute for me to build a clone of my server. And because it is stored as an image, I can easily create a new VM with it. So that’s what I did… And that took about 5 minutes total.  That’s amazing…  To create a new VM from your image, click on the NEW icon (bottom left), select Compute/Virtual Machine/From Gallery, and select My Images from the left menu when selecting an Image. You will find your newly created image. Because this is a clone, you will not be prompted for a new login; the user id/password is the same. About Herve Roggero Herve Roggero, Microsoft Azure MVP, @hroggero, is the founder of Blue Syntax Consulting (http://www.bluesyntaxconsulting.com). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and “PRO SQL Server 2012 Practices” from Apress, a PluralSight author, and runs the Azure Florida Association.

    Read the article

  • How to Add a Business Card Image to a Signature in Outlook 2013 Without the vCard (.vcf) File

    - by Lori Kaufman
    When you add a business card to a signature, an image of the business card is inserted into the signature and the vCard (.vcf) file is attached. If you don’t want to attach the vCard file, you can insert the image only into your signature. To insert only the image of your business card without the .vcf file, click People on the Navigation Bar at the bottom of the Outlook window. To get a business card image we can use, we must view the contacts in any form other than People, so we can open the full contact editing window. To do this, click on a different view in the Current View section of the Home tab. We chose to view our contacts in the Business Card format. Double-click on your contact in the current view. The full contact editing window displays with an image of the business card on the right. Right-click on the business card image and select Copy Image from the popup menu. To close the contact editing window, click the File tab and click Close in the menu list on the left. NOTE: You can also click the X in the upper, right corner of the contact editing window to close it. To open the signature editor, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. NOTE: You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. For more information, see our article about assigning a default signature. In the signature editor, right-click and select Paste from the popup menu. The image is inserted into the signature. You can also use this method to copy a business card image for use in other documents and programs. It’s also possible to insert the vCard (.vcf) file into a signature without the image. We’ll cover that topic tomorrow.     

    Read the article

  • Upgraded Ubuntu 12.04 -> 12.10 and Drupal 7 site now gets errors

    - by Paul B
    I do all my Drupal 7 web development locally, and today I upgraded my local WebDev box from Ubuntu 12.04 to 12.10. Now I get the following errors for all my D7 projects on my localhost WebDev box (Ubuntu 12.10); it was all fine before the upgrade, on Ubuntu 12.04: Error The website encountered an unexpected error. Please try again later. Error message PDOException: SQLSTATE[42000]: Syntax error or access violation: 1286 Unknown storage engine 'InnoDB': SELECT expire, value FROM {semaphore} WHERE name = :name; Array ( [:name] => variable_init ) in lock_may_be_available() (line 167 of /var/www/jobsdaily/includes/lock.inc). A quick bit of research and a look into phpMyAdmin (3.4.11.1) suggest InnoDB is the issue; when I click on a table to see its data I get #1286 - Unknown storage engine 'InnoDB'. I have all my D7 SQL backed up, but I don't really want to go down the whole 'import' route, since it's 10 months of work! Has anyone had this issue, and can anyone suggest fixes? Thanks

    Read the article

  • VMMap - awesome memory analysis tool

    VMMap is a process virtual and physical memory analysis utility. It shows a breakdown of a process's committed virtual memory types as well as the amount of physical memory (working set) assigned by the operating system to those types. Besides graphical representations of memory usage, VMMap also shows summary information and a detailed process memory map. Powerful filtering and refresh capabilities allow you to identify the sources of process memory usage and the memory cost of application features. Besides flexible views for analyzing live processes, VMMap supports the export of data in multiple forms, including a native format that preserves all the information so that you can load it back in later. It also includes command-line options that enable scripting scenarios. VMMap is the ideal tool for developers wanting to understand and optimize their application's memory resource usage.

    Read the article

  • How do I permanently remove those error prompts when using Openbox with GNOME?

    - by YumYumYum
    How do I permanently remove those error prompts when using Openbox with GNOME? For example, the prompt to update to the latest Ubuntu release and similar errors: how do I permanently disable them so that they do not show up in front of my presentation? Follow-up: a) How do I kill that screenshot error box? $ cat /etc/default/whoopsie [General] report_crashes=false $ apport-cli *** Send problem report to the developers? After the problem report has been sent, please fill out the form in the automatically opened web browser. What would you like to do? Your options are: S: Send report (69.7 KB) V: View report K: Keep report file for sending later or copying to somewhere else I: Cancel and ignore future crashes of this program version C: Cancel Please choose (S/V/K/I/C): I b) How do I kill that update notification dialog box? $/etc/xdg/autostart# vim update-notifier.desktop #NoDisplay=true NoDisplay=false :wq $ cat update-notifier.desktop | grep NoDisplay NoDisplay=false No more disturbing popups now.

    Read the article

  • Unable to mount /dev/loop0 during install

    - by AJP
    I was installing 32-bit Ubuntu (ubuntu-10.10-desktop-i386.iso) on VMware Workstation 7.1. During installation an error came up with the following text: (initramfs) mount: mounting dev/loop0 on //filesystem.squashfs failed: Input/Output error Can not mount /dev/loop0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs I did a memory test, which was successful, but when selecting "Try Ubuntu without installing", "Install Ubuntu" or "Check disk for defects" the same error shows up. I downloaded the ISO image from the Ubuntu website "http://www.ubuntu.com/desktop/get-ubuntu/download". As I couldn't find the checksum data, the ISO image was verified by mounting it to a virtual drive and browsing the contents. The ISO image is mounted to a virtual drive in VMware and not burnt to a CD.

    Read the article

  • How do I install pgAdmin III for postgreSQL 9.2?

    - by Vector
    I have a Windows server that runs PostgreSQL 9.2. I want to hit it using pgAdmin III from my Ubuntu 12.10 workstation box. I installed pgAdmin III from Synaptic and also tried a direct download from the PostgreSQL site using the software installer. Regardless, I can only get pgAdmin III for PostgreSQL 9.1. When I run pgAdmin III and point it to my server I get an error message telling me that the database is 9.2 and my pgAdmin III is for 9.1 and isn't compatible with 9.2. I can access the server itself fine from the Ubuntu box - I have Python programs that hit the database with no problems - but I need pgAdmin III for 9.2 running under Ubuntu 12.10. Is it available? Where do I get it?

    Read the article

  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide into other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos, which updates the motion state's transform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun, trying to add collision detection and physics to Sean O'Neil's Procedural Universe. I referred to the Bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the Planetary rigid bodies, but I didn't want to clutter the post. Here is my motion state class:

        class CPhysicsMotionState : public btMotionState {
        protected:
            // This is the transform with position and rotation of the camera
            CSRTTransform* m_srtTransform;
            btTransform m_btPos1;

        public:
            CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform) {
                m_srtTransform = srtTransform;
                m_btPos1 = initialpos;
            }

            virtual ~CPhysicsMotionState() {
                // TODO Auto-generated destructor stub
            }

            virtual void getWorldTransform(btTransform &worldTrans) const {
                worldTrans = m_btPos1;
            }

            void setKinematicPos(btQuaternion &rot, btVector3 &pos) {
                m_btPos1.setRotation(rot);
                m_btPos1.setOrigin(pos);
            }

            virtual void setWorldTransform(const btTransform &worldTrans) {
                btQuaternion rot = worldTrans.getRotation();
                btVector3 pos = worldTrans.getOrigin();
                m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w());
                m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z()));
                m_btPos1 = worldTrans;
            }
        };

    I add a rigid body for the camera:

        // Create rigid body for camera
        btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f));
        btTransform startTransform;
        startTransform.setIdentity(); // forgot to add this line
        CVector vCamera = m_srtCamera.GetPosition();
        startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z));
        m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera);
        btScalar tMass(80.7f);
        bool isDynamic = (tMass != 0.f);
        btVector3 localInertia(0,0,0);
        if (isDynamic)
            cameraShape->calculateLocalInertia(tMass, localInertia);
        btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia);
        m_rigidBody = new btRigidBody(rbInfo);
        m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT);
        m_rigidBody->setActivationState(DISABLE_DEACTIVATION);

    This is the code in Update() that runs each frame:

        CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera();
        Quaternion qRotate = srtCamera.m_qRotate;
        btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w);
        CVector vCamera = CCameraTask::GetPtr()->GetPosition();
        btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z);
        CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState();
        cameraMotionState->setKinematicPos(rot, pos);
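
    For context (this part is editorial, not from the post): a kinematic body only shows up in collision results after the dynamics world has been stepped and its contact manifolds inspected (or a contact callback / ghost object is used). Below is a hedged sketch of that per-frame check, assuming a btDiscreteDynamicsWorld named world and the camera rigid body created above; also worth noting that Bullet may not generate contacts at all for kinematic-vs-static pairs, since neither body is dynamic.

    ```cpp
    #include <btBulletDynamicsCommon.h>

    // Step the world, then walk the contact manifolds the dispatcher produced to see
    // whether the kinematic camera body touched anything this frame.
    void CheckCameraContacts(btDiscreteDynamicsWorld* world, btRigidBody* cameraBody, btScalar dt)
    {
        world->stepSimulation(dt, 7);   // motion states are read/written during this call

        btDispatcher* dispatcher = world->getDispatcher();
        for (int i = 0; i < dispatcher->getNumManifolds(); ++i)
        {
            btPersistentManifold* m = dispatcher->getManifoldByIndexInternal(i);
            // static_cast keeps this working on older Bullet versions where getBody0() returns void*
            const btCollisionObject* a = static_cast<const btCollisionObject*>(m->getBody0());
            const btCollisionObject* b = static_cast<const btCollisionObject*>(m->getBody1());
            if ((a == cameraBody || b == cameraBody) && m->getNumContacts() > 0)
            {
                // The camera is overlapping another collision object this frame; react here.
            }
        }
    }
    ```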

    Read the article

  • Does the Ubuntu mini.iso work with EFI?

    - by jean388
    I have to install Ubuntu 11.10 from the AMD64 mini.iso on a system with a UEFI motherboard. I have configured a virtual machine in VirtualBox to make a test install before I set up the real system. In VirtualBox I have enabled EFI. When the virtual machine is powered on and boots the mini.iso, the GRUB command line is shown. If I try to boot the normal Ubuntu CD it works fine and I get the normal boot options, "Install Ubuntu" etc. Does the Ubuntu mini.iso not work with EFI?

    Read the article

  • What is upcasting/downcasting?

    - by acidzombie24
    When learning about polymorphism you commonly see something like this: class Base { int prv_member; virtual void fn(){} }; class Derived : public Base { int more_data; virtual void fn(){} }; What is upcasting or downcasting? Is (Derived*)base_ptr; an upcast or a downcast? I call it an upcast because you are going away from the base into something more specific. Other people told me it is a downcast because you are going down a hierarchy into something specific, with the top being the root. But other people seem to call it what I call it. When converting a base pointer to a derived pointer, is it called upcasting or downcasting? And if someone can link to an official source or explain why it's called that, great.
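
    For reference, the example below is editorial rather than the poster's code: the names come from the usual drawing of the hierarchy with the base class at the top, so converting Derived* to Base* is an upcast (implicit and always safe), while converting Base* to Derived*, as in (Derived*)base_ptr, is a downcast (explicit, and only valid if the object really is a Derived).

    ```cpp
    #include <iostream>

    class Base {
    public:
        virtual ~Base() {}
        virtual void fn() { std::cout << "Base::fn\n"; }
    };

    class Derived : public Base {
    public:
        void fn() override { std::cout << "Derived::fn\n"; }
    };

    int main() {
        Derived d;
        Base* base_ptr = &d;                               // upcast: implicit, toward the root

        base_ptr->fn();                                    // prints Derived::fn (virtual dispatch)

        Derived* down = dynamic_cast<Derived*>(base_ptr);  // downcast: checked at run time
        if (down) down->fn();

        // (Derived*)base_ptr and static_cast<Derived*>(base_ptr) are also downcasts,
        // but unchecked ones: they trust you that the object really is a Derived.
        return 0;
    }
    ```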

    Read the article

  • Problem Trying to Install ROOT (by CERN) on Ubuntu 11.04 i386

    - by Jose Luis
    I hope you can help me with this problem. I am trying to install ROOT on my computer, but I have run into an issue and I don't know how to solve it. I've downloaded the tar file with the ROOT version that I want to install, extracted the files from the tar file, and run the configure program successfully, but when I run the "make" command I get this result: cp /root/root/core/utils/src/RClStl.cxx core/utils/src/RClStl_tmp.cxx bin/rmkdepend -R -fcore/utils/src/RClStl_tmp.d -Y -w 1000 -- -pipe -m32 -Wall -W -Woverloaded-virtual -fPIC -Iinclude -DR__HAVE_CONFIG -pthread -UR__HAVE_CONFIG -DROOTBUILD -I/root/root/core/utils/src -D__cplusplus -- core/utils/src/RClStl_tmp.cxx g++ -O2 -pipe -m32 -Wall -W -Woverloaded-virtual -fPIC -Iinclude -DR__HAVE_CONFIG -pthread -UR__HAVE_CONFIG -DROOTBUILD -I/root/root/core/utils/src -o core/utils/src/RClStl_tmp.o -c core/utils/src/RClStl_tmp.cxx In file included from core/utils/src/RClStl.h:28:0, from core/utils/src/RClStl_tmp.cxx:16: core/utils/src/Scanner.h:16:27: fatal error: clang/AST/AST.h: No existe el fichero o el directorio (No such file or directory) compilation terminated. make: *** [core/utils/src/RClStl_tmp.o] Error 1 rm core/utils/src/RClStl_tmp.cxx I don't know what to do. Please help me. Thank you in advance.

    Read the article

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more details. Simpler Code My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time). Strongly Typed Before diving into the code, the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface. // With the SDK public class MyData1 : TableServiceEntity {     public string Message { get; set; }     public string Level { get; set; }     public string Severity { get; set; } } //  With the Enzo Azure API public class MyData2 : BaseAzureTable {     public string Message { get; set; }     public string Level { get; set; }     public string Severity { get; set; } } Simpler Code Now that the classes representing an Azure Table entity are defined, let’s review the methods that the Azure SDK would look like when fetching all the entities from an Azure Table (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table): // With the Azure SDK public List<MyData1> FetchAllEntities() {      CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);      CloudTableClient tableClient = storageAccount.CreateCloudTableClient();      TableServiceContext serviceContext = tableClient.GetDataServiceContext();      CloudTableQuery<MyData1> partitionQuery =         (from e in serviceContext.CreateQuery<MyData1>(_tableName)         select new MyData1()         {            PartitionKey = e.PartitionKey,            RowKey = e.RowKey,            Timestamp = e.Timestamp,            Message = e.Message,            Level = e.Level,            Severity = e.Severity            }).AsTableServiceQuery<MyData1>();        return partitionQuery.ToList();  } This code gives you automatic retries because the AsTableServiceQuery does that for you. Also, note that this method is strongly-typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. 
And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp). The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this: // With the Enzo Azure API public List<MyData2> FetchAllEntities() {        AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);        List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");        return res; } As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).  Fetch Strategies Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than if you were to fetch the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes more than 2 to 3 times faster than the sequential methods discussed previously): public List<MyData2> FetchAllEntitiesGUID() {     AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);     List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");     return res; } Faster Results With Sequential Fetch Methods Developing a faster API wasn’t a primary objective; but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different. So it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless when fetching data it seems that the Enzo Azure API delivers faster.  For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each). The average elapsed time shows that the Azure SDK returned the 3000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).
With Fetch Strategies When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out of the box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each entity), and an average execution time over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that the following test hit a limit on my network bandwidth quickly (3.56Mbps), so the results of the fetch strategy is significantly below what it could be with a higher bandwidth. Additional Methods The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities: - Support for batch updates, deletes and inserts - Conversion of entities to DataRow, and List<> to a DataTable - Extension methods for Delete, Merge, Update, Insert - Support for asynchronous calls and cancellation - Support for fetch statistics (total bytes, total REST calls, retries…) For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx). About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

    Read the article

  • Delete U3 System from SanDisk Cruzer USB Drive [closed]

    - by Petriborg
    Possible Duplicate: How do I get rid of “U3 System” on my USB drive? SanDisk Cruzer drives come with "U3" malware built into them. It's intended for Windows, but on Ubuntu it shows up as a "U3 System" CD on the desktop and as /dev/scd1 -> sr1. My question - how do I permanently delete this from the device without Windows? I'm aware of the Windows program, but I don't have access to any Windows machine, and in any event, I wouldn't want to insert the stick into a Windows box because it automatically installs its malware on any Windows box it comes into contact with! A friend of mine realized you could delete the cdrom via sudo echo "1" > /sys/class/block/srXXX/device/delete but it will come back if you reboot.

    Read the article

  • Define outgoing ip address when using ssh

    - by Mark
    I have an Ubuntu server machine (12.04) with 4 IP addresses for different websites that require unique SSL certificates. I sometimes ssh out from this box, and the box I am going to requires me to tell it what IP address I will be coming from. How do I specify which of the 4 IP addresses I want to use as my outgoing IP address? If I do an ifconfig, it appears that I am going out as the last IP address. I guess you would want to specify either the address or the interface.... Thanks in advance! -Mark
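
    For background on what "choosing the outgoing address" means (an illustrative sketch, not a full answer): the source IP of a TCP connection is whatever the local end was bound to before connect(), and OpenSSH exposes exactly that through its -b bind_address option or the BindAddress setting in ssh_config. A minimal socket-level sketch with placeholder addresses:

    ```cpp
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in local{};                                  // the source address we want to appear as
        local.sin_family = AF_INET;
        local.sin_port = 0;                                   // any source port
        inet_pton(AF_INET, "192.0.2.11", &local.sin_addr);    // one of the box's 4 IPs (placeholder)
        if (bind(fd, reinterpret_cast<sockaddr*>(&local), sizeof local) != 0)
            perror("bind");

        sockaddr_in remote{};                                 // the host being connected to (placeholder)
        remote.sin_family = AF_INET;
        remote.sin_port = htons(22);
        inet_pton(AF_INET, "198.51.100.7", &remote.sin_addr);
        if (connect(fd, reinterpret_cast<sockaddr*>(&remote), sizeof remote) != 0)
            perror("connect");

        close(fd);
        return 0;
    }
    ```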

    Read the article

  • Gigaom Article on Oracle, Freescale, and the push for Java on Internet of Things (IoT)

    - by hinkmond
    Here's an interesting article that came out during JavaOne which talks about the Oracle and Freescale partnership, where we are putting Java technology onto the Freescale i.MX6 based "one box" gateway. See: Oracle and Prosyst team up Here's a quote: When it comes to connected devices, there’s still plenty of debate over the right operating system, the correct protocols for sending data and even the basics of where processing will take place — on premise or in the cloud. This might seem esoteric, but if you’re waiting for your phone to unlock your front door, that round trip to the cloud or a fat OS isn’t going to win accolades if you’re waiting in the rain. With all of this in mind, Oracle and Freescale have teamed up to offer an appliance and a Java-based software stack for the internet of things. The first version of the "one box" will work in the connected smart home, but soon after that, Oracle and Freescale will develop later boxes for other industries ranging from healthcare, smart grid to manufacturing. Hinkmond

    Read the article

  • CSS3 - "connecting" 2 classes animation [closed]

    - by Nave Tseva
    I have this CSS +HTML code: <!DOCTYPE HTML> <html> <head> <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> <title>What</title> <style type="text/css"> #page { width: 900px; padding: 0px; margin: 0 auto; direction: rtl; position: relative; } #box1 { position: relative; width: 500px; border: 1px solid black; box-shadow: -3px 8px 34px #808080; border-radius: 20px; box-shadow: -8px 5px 5px #888888; right: 300px; top: 250px; height: 150px; -webkit-transition: all 1s; font-size: large; color: Black; padding: 10px; background: #D0D0D0; opacity: 0; } @-webkit-keyframes myFirst { 0% { right: 300px; top: 150px; background: #D0D0D0; opacity: 0; } 100% { background: #909090; ; right: 300px; top: 200px; opacity: 1; } } #littlebox1 { top: 200px; position: absolute; display: inline-block; } .littlebox1-sentence { font-size: large; padding-bottom: 15px; padding-top: 15px; padding-left: 25px; padding-right: 10px; background: #D0D0D0; border-radius: 10px; -webkit-transition: background .25s ease-in-out; } #littlebox1:hover ~ #box1 { -webkit-transition: all 0s; background: #909090;; right: 300px; top: 200px; -webkit-animation: myFirst 1s; -webkit-animation-fill-mode: initial; opacity: 1; } .littlebox1-sentence:hover { background: #909090; } .littlebox1-sentence:hover + .triangle { border-right: 50px solid #909090; } .triangle { position: relative; width: 0; height: 0; border-right: 50px solid #D0D0D0; border-top: 24px solid transparent; border-bottom: 24px solid transparent; right: 160px; -webkit-transition: border-right .25s ease-in-out; } .triangle:hover { border-right:50px solid #909090; } </style> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script> <script> $(function() { $('.littlebox1-sentence').hover(function() { $(this).css('background', '#909090'); $('.triangle').css('border-right', '50px solid #909090'); }); </script> <script> $(function() { $('.triangle').hover(function() { $(this).css('border-right', '50px solid #909090'); $('.littlebox1-sentence').css('background', '#909090'); }); </script> </head> <body dir="rtl"> <div id="page"> <div id="littlebox1" class="littlebox1-sentence">put your mouse here</div><div id="littlebox1" class="triangle"> </div> <div id="box1"> </div> </div> </body> </html> Live example you will find here: http://jsfiddle.net/FLe4g/12/ The problem here that something here wrong in the second jquery code. I want that every time that I put the mouse on the box, or on the triangke they both will change ther color together. when I put the mouse on the box it works fine, but when I put the mouse on the triangle it don't work. Any suggestions how to fix this code?

    Read the article

  • Public versus private inheritance when some of the parent's methods need to be exposed?

    - by Vorac
    Public inheritance means that all fields from the base class retain their declared visibility, while private inheritance means that they are forced to 'private' within the derived class's scope. What should be done if some of the parent's members (say, methods) need to be publicly exposed? I can think of two solutions. Public inheritance somewhat breaks encapsulation. Furthermore, when you need to find out where the method foo() is defined, you need to look at a chain of base classes. Private inheritance solves these problems, but introduces the burden of writing wrappers (more text). That might be a good thing as far as verbosity goes, but it makes changes to interfaces incredibly cumbersome. What considerations am I missing? What constraints on the type of project are important? How do I choose between the two (I am not even mentioning 'protected')? Note that I am targeting non-virtual methods. There isn't such a discussion for virtual methods (or is there?).
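
    One concrete option the question touches on but does not spell out: with private inheritance you can re-expose selected members with a using-declaration instead of hand-written forwarding wrappers. A minimal sketch with made-up class names:

    ```cpp
    #include <iostream>

    class Widget {                         // the "parent"
    public:
        void draw() const { std::cout << "draw\n"; }
        void internalHelper() {}           // something we would rather not expose
    };

    // Public inheritance: every public member of Widget stays public on the derived type.
    class PublicPanel : public Widget {};

    // Private inheritance: everything becomes private, then selected members are
    // re-published with a using-declaration (no wrapper body to maintain).
    class PrivatePanel : private Widget {
    public:
        using Widget::draw;                // expose draw(), keep internalHelper() hidden
    };

    int main() {
        PublicPanel a;
        a.draw();
        a.internalHelper();                // also visible: the encapsulation leak the question mentions

        PrivatePanel b;
        b.draw();                          // fine: re-exposed
        // b.internalHelper();             // error: inaccessible
        return 0;
    }
    ```

    The using-declaration keeps the exposed surface explicit while avoiding the wrapper boilerplate the question is worried about.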

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspxOff the cuff, it may look like you’re stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That’s not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup. @Html.ValidationMessageFor( m => Model.Whatever, null, new { @class = “my-error” }) By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles to the my-error css class.  When you run your MVC project and view the source, you’ll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own…it’s really up to you. Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn’t occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds, our site originally had a black background, and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else’s post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That’s a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn’t appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you’re passing it. All you’ve got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example… <div class=”my-anchor”>@Html.ValidationMessageFor( m => Model.Whatever )</div> @Html.TextBoxFor(m => Model.Whatever) Now, given that bit of HTML, consider the following CSS… <style> .my-anchor { position:relative; } .field-validation-error {    background-color:white;    border-radius:4px;    border: solid 1px #333;    display: block;    position: absolute;    top:0; right:0; left:0;    text-align:right; } </style> The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using css3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output for Html.ValidationSummary.  
Html.ValidationSummery is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn’t want to show an empty red box when there aren’t any errors. Obviously, you can use the css3 selectors to apply different styles to the list when it’s empty and when it’s not empty; however, that’s not support in all browsers. Well, it just so happens that the unordered list carries the style validation-summary-valid when the list is empty. While the div rendered by the Html.ValidationSummary helper renders a visible div, containing one invisible listitem, you can always just style the whole div with “display:none” when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don’t like that solution, which I like quite well, you can also check the model state for errors with something like this… int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count); That’ll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can. That picture of the fat guy jumping has nothing to do with the article. That’s just a picture of me on the roof and I thought it was funny. Doesn’t every post need a picture?

    Read the article
