Search Results

Search found 3363 results on 135 pages for 'gorm embedded'.

Page 51/135 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Combine regular expressions for splitting camelCase string into words

    - by stou
    I managed to implement a function that converts camel case to words, using the solution suggested by @ridgerunner in this question: Split camelCase word into words with php preg_match (Regular Expression). However, I also want to handle embedded abbreviations, so that 'hasABREVIATIONEmbedded' translates to 'Has ABREVIATION Embedded'. I came up with this solution:

        <?php
        function camelCaseToWords($camelCaseStr) {
            // Convert: "TestASAPTestMore" to "TestASAP TestMore"
            $abreviationsPattern = '/' .   // Match position between UPPERCASE "words"
                '(?<=[A-Z])' .             // Position is after group of uppercase,
                '(?=[A-Z][a-z])' .         // and before group of lowercase letters, except the last upper case letter in the group.
                '/x';
            $arr = preg_split($abreviationsPattern, $camelCaseStr);
            $str = implode(' ', $arr);

            // Convert "TestASAP TestMore" to "Test ASAP Test More"
            $camelCasePattern = '/' .      // Match position between camelCase "words".
                '(?<=[a-z])' .             // Position is after a lowercase,
                '(?=[A-Z])' .              // and before an uppercase letter.
                '/x';
            $arr = preg_split($camelCasePattern, $str);
            $str = implode(' ', $arr);

            $str = ucfirst(trim($str));
            return $str;
        }

        $inputs = array(
            'oneTwoThreeFour',
            'StartsWithCap',
            'hasConsecutiveCAPS',
            'ALLCAPS',
            'ALL_CAPS_AND_UNDERSCORES',
            'hasABREVIATIONEmbedded',
        );

        echo "INPUT";
        foreach ($inputs as $val) {
            echo "'" . $val . "' translates to '" . camelCaseToWords($val) . "'\n";
        }

    The output is:

        INPUT'oneTwoThreeFour' translates to 'One Two Three Four'
        'StartsWithCap' translates to 'Starts With Cap'
        'hasConsecutiveCAPS' translates to 'Has Consecutive CAPS'
        'ALLCAPS' translates to 'ALLCAPS'
        'ALL_CAPS_AND_UNDERSCORES' translates to 'ALL_CAPS_AND_UNDERSCORES'
        'hasABREVIATIONEmbedded' translates to 'Has ABREVIATION Embedded'

    It works as intended. My question is: can I combine the two regular expressions $abreviationsPattern and $camelCasePattern so I can avoid running preg_split() twice?
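    One way to do this (a hedged sketch, not the original poster's code; the function name below is illustrative) is to join the two zero-width positions with an alternation so that a single preg_split() call handles both cases:

        <?php
        // Split once on either boundary: lower->UPPER, or the last UPPER before UPPERlower.
        function camelCaseToWordsCombined($camelCaseStr) {
            $pattern = '/(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])/';
            return ucfirst(trim(implode(' ', preg_split($pattern, $camelCaseStr))));
        }

        echo camelCaseToWordsCombined('hasABREVIATIONEmbedded'); // prints: Has ABREVIATION Embedded

    Because both alternatives are zero-width lookarounds, the split positions are exactly the ones the two separate patterns found, just collected in one pass.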

    Read the article

  • .Net Architecture challenge: The Change-prone Frankenstein Model

    - by SDReyes
    Good morning SO! We've been scratching our heads over this interesting scenario at the office, and we're anxious to hear your ideas and approaches. We have a database whose schema is prone to changes - let's call it Prony. It is used to store configuration parameters for embedded devices, so if the embedded devices guy needs a new table, property or relationship in the model, he should be able to adapt the schema in an easy way (which happens quite often). Prony needs a web interface to create/edit its data. We have another database containing data that also needs to be loaded to the devices, after making some transformations - let's call this one Oddy (this data is generated by an already existing administrative web application). Finally we have Tracy, a server that communicates between our DBs and our embedded devices. She should auto-adapt herself to our DBs' schema changes and serialize the data to the devices. Nice puzzle, don't you think? :) Our current candidates: Rady, the fast one. Let's create some views in Prony that perform the data transformation from Oddy, then use Dynamic Data (or some RAD tool) to create/update a simple web interface for Prony (so it can even show the transformed data coming from Prony :) ). As for Tracy, she will need to be recompiled to update her DB schema (Entity Framework should work) and use reflection to explore the schema recursively and serialize data. Cons: we would have to recompile Tracy and Prony's web interface. What do you think of the candidate(s)? What would you do?

    Read the article

  • How to syntax-highlight XML in CDATA elements in Vim?

    - by Jim Hurne
    Vim's syntax highlighting for XML/XSL is great, except it turns off all syntax highlighting in CDATA regions. Is there a way to turn syntax highlighting on in CDATA regions? At work, we have a lot of XSL code embedded within other XML documents. It would be great if I could get all of the goodness of XML editing for the embedded XSL code as well, without having to temporarily remove the CDATA tags or copy the CDATA content into a temporary file. Example:

        <root>
          <someTag><![CDATA[
            <xsl:template match="/">
              <!-- XSL content here -->
            </xsl:template>
          ]]>
          </someTag>
        </root>

    Note that the name of the tag (in the example, someTag) containing the content could be anything. We also sometimes embed JavaScript inside CDATA regions as well, and again, it would be nice to turn on JavaScript syntax highlighting for those regions. Again, the tag the data is embedded in is usually arbitrary and can be anything.
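    A hedged sketch of the usual technique: pull another syntax file into a cluster with :syntax include, then define a region covering the CDATA body. Dropped into ~/.vim/after/syntax/xml.vim it would look roughly like this (the group names are illustrative, it assumes your Vim runtime ships syntax/xslt.vim - swap in syntax/javascript.vim for the JavaScript case - and it may need tuning because the stock xml.vim already claims CDATA for its own group):

        " Included syntax files bail out if b:current_syntax is set, so clear it first.
        if exists('b:current_syntax')
          let s:save_syntax = b:current_syntax
          unlet b:current_syntax
        endif

        " Load the XSLT syntax into a named cluster.
        syntax include @CdataXsl syntax/xslt.vim

        " Highlight everything between <![CDATA[ and ]]> with the included syntax.
        syntax region xmlCdataEmbedded start=+<!\[CDATA\[+ end=+\]\]>+ contains=@CdataXsl keepend

        if exists('s:save_syntax')
          let b:current_syntax = s:save_syntax
        endif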

    Read the article

  • Suggestion on UPnP presentation

    - by Microkernel
    Hi all, I am working on an embedded device (a bit higher end in terms of system resources, but still an embedded one) which has a lot of media content in it. I am trying to make it UPnP compliant and want to be able to control this device using a UPnP compliant control point/companion device like an iPad. The first step towards this is to be able to present the playlist content to the user. We thought of using HTML5 as the format. But as I am a noob in web technologies, I am not sure what's the best way to produce and present rich, dynamic web pages. The content that's presented is a video/audio listing that the device can play, and I want this listing to be generated using the user's input criteria. So, what would be the best way to generate these dynamic pages, which are rich and rendered as HTML5 pages? (I looked at XML & XSLT, but there seem to be some limitations in how well one can use XSLT, judging from some reviews I saw.) Thanks, Microkernel. PS: This may be silly or very basic, as I am an embedded systems developer and not even a noob in web technologies...

    Read the article

  • Overlapping audio in IE when I show/hide videos

    - by user1062448
    I have a thumbnail list with links to individual videos. Everything works fine in all browsers but IE. In IE, if I start a video and (without clicking pause or stop) click on the thumbnail for the next video, the audio continues playing. In other words, the audio for both videos plays at once. Any suggestions? HTML:

        <ul class="videoButtons">
          <li><a class="vidButton" href="javascript:void(0)" id="1" ><img src="images/videoPics/vid1Thumb.jpg" /><br />video title</a></li>
          <li><a class="vidButton" href="javascript:void(0)" id="2" ><img src="images/videoPics/vid2Thumb.jpg" /><br />video title</a></li>
          <li><a class="vidButton" href="javascript:void(0)" id="3" ><img src="images/videoPics/vid3Thumb.jpg" /><br />video title</a></li>
        </ul>
        <div class="box" id="video1"> <!--flv embedded object - FLVPlayer--> </div>
        <div class="box" id="video2"> <!--flv embedded object - FLVPlayer1--> </div>
        <div class="box" id="video3"> <!--flv embedded object - FLVPlayer2--> </div>

    Show/Hide code:

        $(".vidButton").click(function() {
            var buttonID = $(this).attr('id');   // get ID of the button clicked
            var video = $('#'+'video'+buttonID); // add ID number to video
            $('.box').hide();                    // hide all other divs
            video.fadeTo("slow", 1);             // show video
        });
        });

        // video objects
        swfobject.registerObject("FLVPlayer");
        swfobject.registerObject("FLVPlayer1");
        swfobject.registerObject("FLVPlayer2");
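    Worth noting (a hedged sketch, not a confirmed fix): IE tends to keep a Flash movie's audio running when its container is merely set to display:none, so explicitly stopping the other players before hiding them often helps. StopPlay() and Rewind() are the standard Flash Player JavaScript methods exposed on the embedded object, and swfobject.getObjectById is the SWFObject 2.x companion to registerObject; whether StopPlay actually halts an FLVPlayback video stream depends on how the player SWF was authored, so treat this as a starting point:

        var playerIds = ["FLVPlayer", "FLVPlayer1", "FLVPlayer2"];

        function stopAllPlayers() {
            for (var i = 0; i < playerIds.length; i++) {
                // Works with SWFs registered via swfobject.registerObject
                var swf = swfobject.getObjectById(playerIds[i]);
                if (swf) {
                    try { swf.StopPlay(); swf.Rewind(); } catch (e) { /* player not ready yet */ }
                }
            }
        }

        $(".vidButton").click(function() {
            stopAllPlayers();                          // silence whatever is currently playing
            $('.box').hide();
            $('#video' + $(this).attr('id')).fadeTo("slow", 1);
        });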

    Read the article

  • Sorting text and attached images in MFMailComposeViewController?

    - by eOgas
    In the app I'm currently writing I'd like to populate the message body of an e-mail with a combination of text and images. It took me forever to find out that in order to get an embedded image, you had to have bold tags in the message body (...uhhh, yeah), otherwise the image just shows up as an attachment. But now I have the problem that all of the images just go to the end of the body, and I can't programmatically put text after or in between any of the attachments. So far I've tried: Adding images as part of the body string using <img> tags and a base64 string - this would have worked, but most e-mail clients reject images embedded in this manner. Using normal <img> tags with references to the attached files, using the assigned filenames - didn't work at all. Attaching images normally, but also attaching blocks of text to the e-mail - the text is not embedded in the same manner as the images, and turns out to be an attached txt file on the receiving end. Apple has restricted their MFMailComposeViewController class to the point of ridiculousness, but I know there has to be a way to do this, because they add in their stupid "Sent from my *apple device name here*" message at the end of every e-mail. So does anyone have any ideas?

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.  You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walk through some of the cool scenarios it enables.

    SQL CE – What is it and why should you care?

    SQL CE is a free, embedded database engine that enables easy database storage.

    No Database Installation Required

    SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start up when you first access a SQL CE database, and will automatically shut down when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET applications.

    Works with Existing Data APIs

    SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today.

    Supports Development, Testing and Production Scenarios

    SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web server as well. There are no license restrictions with SQL CE.  It is also totally free.

    Easy Migration to SQL Server

    SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work.
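    To make the "Works with Existing Data APIs" point concrete, here is a minimal hedged sketch (not from the original post) of querying a SQL CE 4 database through plain ADO.NET, using the SqlCeConnection and SqlCeCommand classes from System.Data.SqlServerCe; the file name and table are illustrative:

        using System;
        using System.Data.SqlServerCe;

        class QuickQuery
        {
            static void Main()
            {
                // Store.sdf is an illustrative file name; a SQL CE database is just a file.
                using (var conn = new SqlCeConnection("Data Source=Store.sdf"))
                {
                    conn.Open();
                    using (var cmd = new SqlCeCommand("SELECT COUNT(*) FROM Products", conn))
                    {
                        Console.WriteLine("Products: {0}", cmd.ExecuteScalar());
                    }
                }
            }
        }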
    New Tooling Support for SQL CE in VS 2010 SP1

    VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now:

    Create new SQL CE Databases
    Edit and Modify SQL CE Database Schema and Indexes
    Populate SQL CE Databases with Data
    Use the Entity Framework (EF) designer to create model layers against SQL CE databases
    Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS
    Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases

    You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects.

    Download

    You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1.

    Walkthrough of Two Scenarios

    In this blog post I’m going to walk through how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walk through:

    How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control.
    How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application.

    You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1).

    Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView

    This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Forms application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control.

    Step 1: Create a new ASP.NET Web Forms Project

    We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented:

    Step 2: Create a SQL CE Database

    Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project:

    Step 3: Adding a “Products” Table

    Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.
    Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database.

    Step 4: Populate with Data

    Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it:

    Step 5: Create an EF Model Layer

    We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

    Step 6: Compile the Project

    Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

    Step 7: Create a Page that Uses our EF Model Layer

    Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.
    Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do are to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid.

    Step 8: Run the Application

    Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database.

    Learn More about using EF with ASP.NET Web Forms

    Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

    Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3

    We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to:

    Define your model objects by simply writing “plain old classes” with no base classes or visual designer required
    Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
    Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
    Optionally auto-create a database based on the model classes you define – allowing you to start from code first

    I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

    Step 1: Create a new ASP.NET MVC 3 Project

    We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented:

    Step 2: Use NuGet to Install EFCodeFirst

    Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
    Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application:

    Step 3: Build Some Model Classes

    Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.  The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

    Step 4: Listing Dinners

    We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and selecting the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor:

    Step 5: Configure our Project to use a SQL CE Database

    We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connection string “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

    Step 6: Running our Application

    Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us.
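    The model classes, the Upcoming action, and the <connectionString> entry are described above but shown only as screenshots in the original post. As a rough, hedged reconstruction based purely on that description (property names beyond Dinner, RSVP and NerdDinners are illustrative, not the post's exact code), they would look roughly like this:

        using System;
        using System.Collections.Generic;
        using System.Data.Entity;   // DbContext/DbSet come from the EFCodeFirst package

        public class Dinner
        {
            public int DinnerID { get; set; }
            public string Title { get; set; }
            public DateTime EventDate { get; set; }
            public string Address { get; set; }
            public virtual ICollection<RSVP> RSVPs { get; set; }
        }

        public class RSVP
        {
            public int RsvpID { get; set; }
            public int DinnerID { get; set; }
            public string AttendeeEmail { get; set; }
            public virtual Dinner Dinner { get; set; }
        }

        // Handles retrieval/persistence of Dinner and RSVP instances from the database.
        public class NerdDinners : DbContext
        {
            public DbSet<Dinner> Dinners { get; set; }
            public DbSet<RSVP> RSVPs { get; set; }
        }

    The “Upcoming” action in Step 4 would then be a simple LINQ query against the context (Controller/ActionResult come from System.Web.Mvc):

        public class DinnersController : Controller
        {
            private NerdDinners db = new NerdDinners();

            public ActionResult Upcoming()
            {
                var upcoming = from d in db.Dinners
                               where d.EventDate > DateTime.Now
                               select d;
                return View(upcoming.ToList());
            }
        }

    And the Step 5 connection string points the NerdDinners context at a SQL CE file under \App_Data:

        <connectionStrings>
          <add name="NerdDinners"
               providerName="System.Data.SqlServerCe.4.0"
               connectionString="Data Source=|DataDirectory|NerdDinners.sdf" />
        </connectionStrings>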
    Step 7: Using VS 2010 SP1 to Explore our newly created SQL CE Database

    Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project” to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up:

    Step 8: Changing our Model and Database Schema

    Let’s now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today:

    Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
    Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB)

    There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

    Step 9: Modify our SQL CE Database Schema using VS 2010 SP1

    The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.
    To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As I mentioned earlier, when a database is automatically created by EF Code-First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database:

    Summary

    SQL CE provides a free, embedded database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • What happens when the USB key or SD card I've installed VMware ESXi on fails?

    - by ewwhite
    An SD (SDHC) card installed in an HP ProLiant DL380p Gen8 server running VMware ESXi just failed. I encountered some rather ominous looking messages on the vCenter console and in the HP ProLiant ILO event log... Lost connectivity to the device ... backing the boot filesystem. As a result, host configuration changes will not be saved to persistent storage. Embedded Flash/SD-CARD: Error writing media 0, physical block 848880: Stack Exception. VMware advocates the use of USB and SD (SDHC) boot devices for ESXi. It was one of the main reasons the smaller footprint ESXi was developed (versus the older ESX). I've spent much time highlighting the differences between ESXi's installable and embedded modes to coworkers and clients. However, these failures do seem to happen. In this case, this is my third instance. Luckily, this is a vSphere cluster with SAN storage. What steps should be taken to remediate this failure?

    Read the article

  • Campus Network Design - Firewalls

    - by user3081239
    I am designing a campus network, and the design looks like this: LINX is the London Internet Exchange and JANET is the Joint Academic Network. My goal is an almost fully redundant network with high availability, because it will have to support about 15k people, including academic staff, administrative staff and students. I have read some documents in the process, but I am still not sure about some aspects. I want to dedicate this one to firewalls: what are the driving factors in deciding to employ a dedicated firewall, instead of an embedded firewall in the border router? From what I can see, an embedded firewall has these advantages: easier to maintain, better integration, one less hop, less space required, and lower cost. A dedicated firewall has the advantage of being modular. Is there anything else? What am I missing?

    Read the article

  • HP Compaq T20 Thin Clients & Windows Server 2008 R2: RDP Disconnects instantly.

    - by sinni800
    Hello, I have some HP Compaq T20 thin clients connecting to a Windows Server 2003 machine. Now I want to upgrade to 2008 R2, so I tested a trial installation with Remote Desktop in administration mode. When I try to connect my T20 to the server, it doesn't matter whether I turn off encryption or not - it disconnects with a generic error instantly. The T20s have Windows CE Embedded with RDP 5.2 installed. Out of curiosity I tried Windows Server 2008 (not R2) and it worked! I tried the same with a Windows 7 machine set up as the host - no luck. I cannot update the T20s to Windows NT Embedded, for example, because they only have a small amount of flash memory. It seems the "new" version of RDP coming with Windows 7 / Windows Server 2008 R2 is completely incompatible with the older 5.2 version. People are having the same problem here: http://social.technet.microsoft.com/Forums/en-US/winserverTS/thread/700488cd-a872-47e5-85a7-595f050afc10

    Read the article

  • Hotlinking images in Outlook emails

    - by Alexander Voglund
    I want to create an email with a signature that contains an external image. For example, I would like to always include my SuperUser flair in my signature like so: http://superuser.com/users/flair/53525.png This can be done if I insert an image and select the above source. The problem occurs when I copy the image and paste it into the body of a new email. Outlook creates an embedded version and breaks the link to the original image. How can an image be pasted into Outlook and NOT be embedded (i.e. so the link to the original image is maintained)?

    Read the article

  • How can I send an email from Mail.app to Outlook with an attachment that does not embed into the email body?

    - by JAG2007
    I'm using Mail.app (on Mac OS X 10.6) and when I send an email to users on PC Outlook, with an attached image, they get the email as an image embedded into the body, not as an attachment. I even tried clicking "view as icon" before sending the attachment from Mail.app, but that made no difference. I also tried this myself, sending from Mail.app over to my PC's Outlook, and I do get that same problem. In Outlook the image is not coming through as an attachment, but as an image embedded into the body of the email. The primary reason this is an issue is that the user is then unable to click "save as" and has to actually copy and paste it into some other program, which means the file is converted from jpg or png to the bmp format. But beyond that, most of my recipients don't even know how to copy and paste it into another program to save it that way anyway. They need the "save attachment as" functionality.

    Read the article

  • PDF printer which correctly embeds EPS into PDF

    - by Alexey Popkov
    I need to convert to PDF a Word document containing embedded vector EPS images (by printing to a PDF printer - I use Word 2003). Several years ago I tested some commercial and free PDF printers and found none, except Acrobat Distiller, that embeds the real PostScript content of the EPS image in the generated PDF file instead of the preview shown by Word. Has the situation changed since then? Do you know any free or commercial PDF printer which handles embedded EPS correctly? UPDATE: A good thread about EPS handling in different versions of Word: http://forums.adobe.com/thread/439881

    Read the article

  • How to install the latest version of Imagick on CentOS 5.8 64-bit using bash

    - by user57221
    How can I download and install the latest version of Imagick on CentOS 5.8 64-bit using bash, for PHP 5.4?

        > yum info php
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: mirror.ellogroup.com
         * epel: mirror01.th.ifl.net
         * extras: mirror.ellogroup.com
         * updates: mirror.ellogroup.com
        Installed Packages
        Name        : php
        Arch        : x86_64
        Version     : 5.4.3
        Release     : 1.el5.remi
        Size        : 8.8 M
        Repo        : installed
        Summary     : The PHP HTML-embedded scripting language. (PHP: Hypertext Preprocessor)
        URL         : http://www.php.net/
        License     : PHP
        Description : PHP is an HTML-embedded scripting language. PHP attempts to make it
                    : easy for developers to write dynamically generated webpages. PHP also
                    : offers built-in database integration for several commercial and
                    : non-commercial database management systems, so writing a
                    : database-enabled webpage with PHP is fairly simple. The most common
                    : use of PHP coding is probably as a replacement for CGI scripts.
                    :
                    : The php package contains the module which adds support for the PHP
                    : language to Apache HTTP Server.
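    A hedged sketch of the usual route on a remi-based PHP install like the one above: build the extension through PECL and enable it. The package names and paths below are the common CentOS 5 defaults, so treat them as assumptions rather than a verified recipe:

        # ImageMagick itself, plus the headers and build tools PECL needs
        yum install -y ImageMagick ImageMagick-devel gcc make php-devel php-pear

        # Build and install the PHP extension
        pecl install imagick

        # Enable it and restart Apache
        echo "extension=imagick.so" > /etc/php.d/imagick.ini
        service httpd restart

        # Verify the extension is loaded
        php -m | grep imagick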

    Read the article

  • ARM Debian (squeeze) USB driver with mismatched 3.3.3 kernel but /lib/modules/2.6.36

    - by frank
    Hi guys, my SheevaPlug embedded server works fine, but when I wanted to use USB, the device does not get attached to /dev/ttyUSB0. lsusb shows it correctly: Bus 001 Device 002: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port. And modprobe usbserial raises: FATAL: Could not load /lib/modules/3.3.3/modules.dep: No such file or directory. In the /lib/modules/ folder there is instead a 2.6.36 folder, while uname -r gives 3.3.3. How can I overcome this mismatch? Can I create a symlink? I can't flash this embedded device since it is deployed somewhere; I only have SSH access. Please advise!

    Read the article

  • Multimedia PDF (Audio, Video and Links) That works on Desktop and iOS

    - by Keefer
    We've got a client that wants to have a PDF that has embedded audio, video and links. Using Acrobat Pro 9.x I've been able to embed all three no problem. They all work/play back if I use Acrobat Pro/Acrobat Reader, but they don't show up in OS X's Preview at all. They also don't show up in iOS. Links work everywhere, but no multimedia. So I tried creating a similar document via Apple's iBooks Author, then exported it as a PDF. Links work, but multimedia doesn't seem to work anywhere. Is there any way to make a PDF that works universally with embedded links and multimedia?

    Read the article

  • PHP 5.2 to 5.3 not upgrading, no errors

    - by Webnet
    I'm following this guide: http://atik97.wordpress.com/2010/06/12/how-to-upgrade-to-php-5-3-in-ubuntu-9-10/ I've done all the steps, but it's still showing PHP 5.2.6 - any ideas? I have also tried -cgi instead of -cli; neither has any effect.

    Update: I've tried rebooting the server to see if that would have any effect, and unfortunately it didn't.

    Update: Output of dpkg -l *php*:

        Desired=Unknown/Install/Remove/Purge/Hold
        | Status=Not/Inst/Cfg-files/Unpacked/Failed-cfg/Half-inst/trig-aWait/Trig-pend
        |/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
        ||/ Name                        Version                  Description
        +++-===========================-========================-================================================================
        un  libapache2-mod-php4         <none>                   (no description available)
        ii  libapache2-mod-php5         5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (Apache 2 module)
        un  libapache2-mod-php5filter   <none>                   (no description available)
        ii  php-pear                    5.2.6.dfsg.1-3ubuntu4.6  PEAR - PHP Extension and Application Repository
        un  php4-cli                    <none>                   (no description available)
        un  php4-dev                    <none>                   (no description available)
        un  php4-mysql                  <none>                   (no description available)
        un  php4-pear                   <none>                   (no description available)
        ii  php5                        5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (metapackage)
        ii  php5-cgi                    5.2.6.dfsg.1-3ubuntu4.6  server-side, HTML-embedded scripting language (CGI binary)
        ii  php5-cli                    5.2.6.dfsg.1-3ubuntu4.6  command-line interpreter for the php5 scripting language
        ii  php5-common                 5.2.6.dfsg.1-3ubuntu4.6  Common files for packages built from the php5 source
        ii  php5-curl                   5.2.6.dfsg.1-3ubuntu4.6  CURL module for php5
        un  php5-dev                    <none>                   (no description available)
        ii  php5-gd                     5.2.6.dfsg.1-3ubuntu4.6  GD module for php5
        ii  php5-imap                   5.2.6-0ubuntu5.1         IMAP module for php5
        un  php5-json                   <none>                   (no description available)
        ii  php5-mcrypt                 5.2.6-0ubuntu2           MCrypt module for php5
        ii  php5-mysql                  5.2.6.dfsg.1-3ubuntu4.6  MySQL module for php5
        un  php5-mysqli                 <none>                   (no description available)
        ii  php5-xsl                    5.2.6.dfsg.1-3ubuntu4.6  XSL module for php5
        un  phpapi-20060613+lfs         <none>                   (no description available)
        ii  phpmyadmin                  4:3.1.2-1ubuntu0.2       MySQL web administration tool

    Update: The following commands and their outputs:

        grep php53 /etc/apt/sources.list
        deb http://php53.dotdeb.org stable all
        deb-src http://php53.dotdeb.org stable all

        apt-cache search -f "libapache2-mod-php5"
        http://pastebin.com/XNXdsXYC

    Update: I've updated the question with more details on installed packages.
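    For what it's worth, a hedged sketch of the checks to run next, assuming the dotdeb entries shown above are meant to supply the 5.3 packages and the repository key has been imported per the guide (whether dotdeb's Debian builds are even installable on this Ubuntu release is a separate question):

        apt-get update
        # Shows which version apt would install and which repository it comes from
        apt-cache policy php5 php5-cli libapache2-mod-php5
        # If the candidate version is a 5.3.x build, pull it in and restart Apache
        apt-get install php5 php5-cli php5-cgi libapache2-mod-php5
        /etc/init.d/apache2 restart
        php -v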

    Read the article

  • Gmail Embed Image Icon

    - by Jason M
    I'm using the Gmail labs feature that allows me to embed images into emails. The feature works fine and when I send the email, I believe it actually does embed the image and the recipient sees it as such. But when I look in my sent box and bring up the email, instead of the embedded image, I see an icon in its place. Is this correct? Why is this like this? I want to be able to see the embedded image just like when the email was composed.

    Read the article

  • How do I prevent my swf files being hotlinked, downloaded etc.

    - by undefined
    I have swf files that are embedded in a PHP page using SWFObject. These swf files are in the same directory as my PHP files. For example, www.myurl.com/index.php embeds www.myurl.com/flashfile.swf; index.php and flashfile.swf are in the same directory. However, I want to prevent people from being able to type in www.myurl.com/flashfile.swf and view the swf directly. I want the server to deny access to this file unless it has been embedded by the PHP file. Should I move my swfs to another folder and protect that folder somehow - is this done with the .htaccess file? I am running Apache on a Linux machine. While my main concern is the swf files, I would like to protect graphics used on the site too. All help appreciated, thanks.
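    A hedged sketch of the usual .htaccess approach: a mod_rewrite referer check that refuses requests for .swf and image files when the Referer header doesn't point back at your own site (the domain below is the one from the question; adjust the extensions to taste). Note the limits: referers can be spoofed or stripped by proxies, and nothing stops someone saving the file from their browser cache, so this deters casual hotlinking rather than truly protecting the files:

        RewriteEngine On
        # Forbid .swf/image requests whose Referer is not this site. Empty referers are
        # blocked too, which also stops the URL being typed straight into the address bar.
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?myurl\.com/ [NC]
        RewriteRule \.(swf|jpe?g|png|gif)$ - [F,NC]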

    Read the article

  • postMessage to PDF in an iFrame

    - by Linus
    Here's my situation. I had a webpage with an embedded PDF form. We used a basic object tag (embed in FF) to load the PDF file like this:

        <object id="pdfForm" height="100%" width="100%" type="application/pdf" data="..url"></object>

    On this webpage was an HTML Save button that would trigger some JavaScript which used the postMessage API of the embedded object to execute JavaScript embedded in the PDF. Basically, that code looked like this:

        function save() {
            sendMessage(["submitForm"]);
        }

        function sendMessage(aMessage) {
            pdfObject = document.getElementById("pdfForm");
            if (typeof(pdfObject) == "undefined") return;
            if (typeof (pdfObject.postMessage) == "undefined") return;
            pdfObject.postMessage(aMessage);
        }

    This all was working beautifully. Except we ran into an issue with Firefox, so now we need to embed the PDF using an iframe instead of the object tag. So now the PDF is embedded using this code:

        <iframe id="pdfWrapper" src="..someUrl" width="100%" height="800px" frameborder="0"></iframe>

    Unfortunately, with this code, the JavaScript for posting a message no longer works, and I can't really figure out how to get access to the pdf object anymore so that I can access the postMessage API. Using Fiddler or the Chrome JavaScript debugger, it is clear that within the iframe the browser is automatically generating an embed tag (not an object tag), but that does not let me access the postMessage API. This is the code I'm trying, which doesn't work:

        function sendMessage(aMessage) {
            var frame = document.getElementById("pdfWrapper");
            var doc = null;
            if (frame.contentDocument)
                doc = frame.contentDocument;
            else if (frame.contentWindow)
                doc = frame.contentWindow.document;
            else if (frame.document)
                doc = frame.document;
            if (doc==null || typeof(doc) == "undefined") return;
            var pdfObject = doc.embeds[0];
            if (pdfObject==null || typeof (pdfObject.postMessage) == "undefined") return;
            pdfObject.postMessage(aMessage);
        }

    Any help on this? Sorry for the long question. EDIT: I've been asked to provide samples in code so that people can test whether the messaging works. Essentially, all you need is any PDF with this JavaScript embedded:

        function myOnMessage(aMessage) {
            app.alert("Hello World!");
        }

        function myOnDisclose(cURL, cDocumentURL) {
            return true;
        }

        function myOnError(error, aMessage) {
            app.alert(error);
        }

        var msgHandlerObject = new Object();
        msgHandlerObject.onMessage = myOnMessage;
        msgHandlerObject.onError = myOnError;
        msgHandlerObject.onDisclose = myOnDisclose;
        msgHandlerObject.myDoc = this;
        this.hostContainer.messageHandler = msgHandlerObject;

    I realize you need Acrobat Pro to create PDFs with JavaScript, so to make this easier, I posted sample code - both working and non-working scenarios - at this URL: http://www.filedropper.com/pdfmessage You can download the zip and extract it to /inetpub/wwwroot if you use Windows, and then point your browser to either the works.htm or fails.htm. Thanks for any help you can give.
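    Not a confirmed answer, but one hedged sketch worth trying when the framed page is same-origin: keep the original <object id="pdfForm"> tag inside the page loaded by the iframe, expose a small forwarding function there, and call that function from the parent through contentWindow (the sendToPdf name is illustrative):

        // Inside the page loaded by the iframe (the page that contains <object id="pdfForm">):
        function sendToPdf(aMessage) {
            var pdf = document.getElementById("pdfForm");
            if (pdf && typeof pdf.postMessage != "undefined") {
                pdf.postMessage(aMessage);
            }
        }

        // In the parent page; same-origin frames allow direct function calls:
        function sendMessage(aMessage) {
            var frame = document.getElementById("pdfWrapper");
            if (frame && frame.contentWindow && typeof frame.contentWindow.sendToPdf == "function") {
                frame.contentWindow.sendToPdf(aMessage);
            }
        }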

    Read the article

  • Oracle Fusion Applications: Changing the Game

    - by kellsey.ruppel(at)oracle.com
    Originally posted in the Oracle Profit Magazine, November 2010 Edition.

    When the order processing system red-flags a customer's credit status, the IT department doesn't get the customer's call. When a supplier misses a delivery date for a key automotive assembly, it's not the CIO who has to answer for the error. Knowledge workers (known in IT circles as "users") are on the front lines when an exception occurs in an established business process. They're also the ones who study sales trends to decide when to open a new store in an up-and-coming neighborhood, which products are most profitable, how employee skill sets are evolving, and which suppliers are most efficient. In short, knowledge workers are masters of business as unusual. Traditional enterprise resource planning (ERP) systems and other familiar enterprise applications excel at automating, managing, and executing standard business processes. These programs shine when everything goes as planned. Life gets even trickier when a traditional application needs to be extended with a new service or an extra step is added to a business process when new products are brought to market, divisions are merged, or companies are acquired. Monolithic applications often need the IT department to step in and make the necessary adjustments--incurring additional costs and delays. Until now. When Oracle unveiled the much-anticipated family of Oracle Fusion Applications at Oracle OpenWorld in September 2010, knowledge workers in particular had a lot to cheer about. Business users will soon have ready access to analytical information and collaboration tools in the context of what they are working on, so they can make better decisions when problems or opportunities arise. Additionally, the Oracle Fusion Applications platform will make it easy for business users to tweak processes, create new capabilities, and find information, often without the need for IT department assistance and while still following company guidelines. And IT leaders will be happy to hear about new deployment options, guided implementation and setup tools, and cost-saving management capabilities. Just as important, the underlying technologies in Oracle Fusion Applications will allow organizations to choose among their existing investments and next-generation enterprise applications so they can introduce innovations at a pace that makes the most business and financial sense. "Oracle Fusion Applications are architected so you don't have to do rip and replace," says Jim Hayes, managing director of the consulting firm Accenture. "That's very important for creating a business case that will get through the steering committee and be approved by the board. It shows you can drive value and make a difference in the near term." For these and other reasons, analysts and early adopters are calling Oracle Fusion Applications a game changer for enterprise customers. The differences become apparent in three key areas: the way we innovate, work, and adopt technology.

    Game Changer #1: New Standard for Innovation

    Change is a constant challenge for most businesses, whether the catalysts are market dynamics, new competition, or the ever-expanding regulatory environment. And, in an ongoing effort to differentiate, business leaders are constantly looking for new ways to do business, serve constituents, and bring new products and services to market. In addition, companies face significant costs to keep their applications up-to-date.
    For example, when a company adds new suppliers to a procurement system, the IT shop typically has to invest time, effort, and even consulting fees for custom integrations that allow various ERP systems to communicate with each other. Oracle Fusion Applications were built on Web services and a modular SOA foundation to ease customizations and integration activities among all applications--whether from Oracle or another vendor. Interfaces and updates written in ubiquitous Java, rather than a proprietary coding language, allow organizations to tap into existing in-house technical skills rather than seek expensive outside specialists. And with SOA, organizations can extend a feature set or integrate with other SOA environments by combining Web services such as "look up customer" into a new business process managed by the BPEL orchestration engine. Flexibility like this has long-term implications. "Because users capture these changes at a higher metadata layer, not in the application's code, changes and additions are protected even as new versions of Oracle Fusion Applications are released," says Steve Miranda, senior vice president of applications development at Oracle. "This is a much more sustainable approach because you don't incur costly customizations that prevent upgrades and other innovations." And changes are easier to make: if one change is made in the metadata, that change is automatically reflected throughout the application interface, business intelligence, business process, and business logic.

    Game Changer #2: New Standard for Work

    Boosting productivity comes down to doing the basics right: running business processes more efficiently and managing exceptions more effectively, so users can accomplish more in the course of a day or spend more quality time with the most profitable customers. The fastest way to improve process efficiency is to reduce the number of steps it takes to execute common tasks, such as ordering office equipment from an internal procurement system. Oracle Fusion Applications will deliver a complete role-based user experience with business intelligence and collaboration capabilities provided in the context of the work at hand. "We created every Oracle Fusion Applications screen by asking 'What does the user need to know?' 'What does he or she need to do?' and 'Who do they need to work with to get the job done?'" Miranda explains. So when the sales department heads need new laptops, the self-service procurement screen will not only display a list of approved vendors and configurations, but also a running list of reviews by coworkers who recently purchased the various models. Embedded intelligence may also display prevailing delivery lead times based on actual order histories, not the generic shipping dates vendors may quote. The pervasive business intelligence serves many other business activities across all areas of the enterprise. For example, a manager considering whether to promote a direct report can see the person's employee profile, with a salary history, appraisal summaries, and a rundown of skills and training. This approach to business intelligence also has implications for supply chain management. "One of the challenges at Ingersoll Rand is lack of visibility in our supply chain," says Mike Macrie, global director of enterprise applications for global industrial firm Ingersoll Rand.
    "Oracle Fusion Applications are going to provide the embedded intelligence to give us that visibility and give us the ability to analyze those orders at any point in our supply chain." Oracle Fusion Applications will also create a "role-based user experience" that displays a work list of events that need attention, based on user job function. Role awareness guides users with daily lists of action items and exceptions. So a credit manager may see seven invoices with discounts that are about to expire or 12 suppliers that have been put on hold because credit memos are awaiting approval. Individualization extends to the search capabilities of Oracle Fusion Applications. The platform uses Web-style search screens powered by an Oracle enterprise search engine, with a security framework that filters search results so individuals will only see the internal information they're authorized to access. A further aid to productivity is Oracle Fusion Applications' integration with Web 2.0 collaboration and social networking resources for business environments. Hover-over text will reveal relevant contact information whenever the name of a person appears in an Oracle Fusion Application. Users can connect via an online chat, phone call, or instant message without leaving the main application, reducing the time required for an accounts payable staffer to resolve a mismatch between an invoiced charge and the service record, for example. Addresses of suppliers, customers, or partners will also initiate hover-over text to show contact details and Web-based maps. Finally, Oracle Fusion Applications will promote a new way of working with purpose-driven communities that can bring new efficiencies to everything from cultivating sales leads to managing new projects. As soon as a lead or project materializes, the applications will automatically gather relevant participants into an online community that shares member contact information, schedules, discussion forums, and Wiki pages. "Oracle Fusion Applications will allow us to take it to the next level with embedded Web 2.0 tools and the embedded analytics," says Steve Printz, CIO and vice president, supply chain management, at window-and-door manufacturer Pella. "[This] allows those employees today who are processing transactions to really contribute to the success of the company and become decision-makers."

    Game Changer #3: New Standard for Technology Adoption

    As IT becomes a dominant component of how businesses run and compete, organizations need to lower the cost of implementing applications and introducing new application features. In the past, rolling out new code often required creating a test bed system, moving beta code to a separate system for user feedback, and--once all the revisions were made--moving version one of the software onto production systems, where business users could finally get the needed new features. Oracle Fusion Applications will use a dedicated setup manager application to streamline this process. First, the setup manager will help scope out the project, querying users about their requirements. "From those questions and answers we determine the steps and the order of those steps that will enable that task," Miranda says. Next, system utilities will assign tasks to owners, track completion status, and monitor the overall status of a programming effort. Oracle Fusion Applications can then recommend Web services that allow users to migrate setup choices and steps across all the various deployments of the application.
    Those setup capabilities automate the migration from test systems to production systems, as well as between different business units that may be using the same application. "The self-service ability of the setup manager helps business users change setups with very little intervention from the IT team," says Ravi Kumar, vice president at IT services company Infosys. "That to me is a big difference from how we've viewed enterprise applications before." For additional flexibility, organizations will be able to adopt Oracle Fusion Applications modules in either of two modes: a single-instance alternative uses one database for all Oracle Fusion Applications, while a "pillar mode" creates separate databases to underpin each application. This means IT departments running any one of Oracle's applications or even third-party applications can plug Oracle Fusion Applications modules into their environment and see additional business value created on top of their existing systems. And Oracle Fusion Applications offer a hybrid approach to deployment. The applications are all software-as-a-service-ready, so customers can choose on-premises, public or private cloud, or a combination of these to suit their business needs. It's that combination of flexibility and a roadmap for the future that may be the biggest game changer of all. "The Oracle Fusion Applications architecture allows us to migrate our company at a pace that's consistent with our business strategy, whereas before we might have had to do it with a massive upgrade," says Macrie of Ingersoll Rand. "We're looking forward to that architecture to really give us more flexibility in how we migrate over time."

    For More Information
    User Input Key to the Success of Oracle Fusion Applications
    Transforming Coexistence into Strategic Value
    Under the Hood
    Oracle Fusion Applications
    Oracle Service-Oriented Architecture

    Read the article

  • How To Watch Live Streaming of Oscars 2010 (Academy Awards)

    - by Gopinath
    The Academy Awards, more popularly known as the Oscars, will go live this year on 7th March 2010 (8 PM ET) at the Kodak Theater (Hollywood), Los Angeles, California. It's a star-studded event every movie lover wishes to follow and watch live. We at Tech Dreams always love to write about live streaming of popular events happening across the globe. Here is our guide to following Oscars 2010.

    Oscars 2010 Live Streams

    Last year we did not have many choices for viewing the Oscars online, but this year there are plenty of them available from the best of the media powerhouses:

    APLive Oscars coverage on livestream.com (embedded below)
    Oscars.com – The Official Web Site of the Academy Awards
    Oscars.org Live Streaming
    Academy Awards – Official Live Streaming Channel on livestream.com (embedded below)
    APLive Oscars coverage on Facebook
    AP Live Oscars Red Carpet Coverage
    Academy Awards – Live Coverage

    Websites To View Highlights & Exclusive Clips Of Oscars 2010

    If you miss the live streaming of Oscars 2010, here are a few sites you can check to view video highlights of the entire event. A few websites like Hulu have access to exclusive moments.

    Oscar's Official YouTube Channel
    Hulu Award Season 2010 coverage

    Oscars 2010 Date and Time

    Oscars 2010 will begin on 7th March, Sunday, 8 PM EST in California. The local time in India will be around 9:30 AM on Monday. Here is a list of major cities and the local time at which Oscars 2010 is going to start:

    City – Date & Time
    California – March 7th, Sunday 20:00
    Adelaide – March 8th, Monday 14:30
    Bangkok – March 8th, Monday 11:00
    Beijing – March 8th, Monday 12:00
    Brisbane – March 8th, Monday 14:00
    Cape Town – March 8th, Monday 06:00
    Dubai – March 8th, Monday 08:00
    Frankfurt – March 8th, Monday 05:00
    Hong Kong – March 8th, Monday 12:00
    Delhi/Chennai/Mumbai/Kolkata – March 8th, Monday 09:30
    New York – March 7th, Sunday 23:00
    Paris – March 8th, Monday 05:00
    Washington – March 7th, Sunday 23:00
    London – March 8th, Monday 04:00

    For more cities visit this link. If you know any other sources that provide live streaming of the Oscars, let us hear through the comments. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • Nice take on Open Source

    - by EmbeddedInsider
    I just revisited the "Micro Framework" - Microsoft's bootable runtime, essentially an OS that allows managed code to run on small 32-bit CPUs, even without memory management. Things are happening: http://msdn.microsoft.com/en-us/netframework/bb267253.aspx Abstract: The Microsoft .NET Micro Framework is a bootable runtime module that brings the advantages of .NET programming to devices too resource-constrained to run other Microsoft embedded platforms. The benefits of developing with the .NET Micro Framework include the C# programming language, a managed execution environment, a substantial subset of the .NET libraries, and Visual Studio™ deployment and debugging. In this white paper we explain why the .NET Micro Framework is an ideal choice for embedded development and provide technical details of the platform's Hardware Abstraction Layer (HAL) and Common Language Runtime (CLR). The "Micro Framework" is an interesting product: it is very low cost, like zero, and it is largely community controlled under the Apache License. A partner network is building, and the application environment is .NET. I have been following this for some time, and the community open source approach seems to be working. There are new features/packages emerging, for example an F# programming language (ARGH! I am still wrestling with VB and C#). Anyway, what I found most interesting was a port to TRON. TRON is a very popular Japanese open source initiative. It is a very real-time, very compact kernel, and is, like the Micro Framework, 'free as in beer'. One limitation of MF was that it was not real-time, but the merger with TRON may eliminate that problem. Certainly, if I were dealing with a consumer product with quantities in the millions (like a SmartGrid device, or a toy) I would seriously consider something out of this technology pool.

    Read the article

< Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >