Search Results

Search found 5885 results on 236 pages for 'finally'.

Page 64 of 236

  • Fun with RadCaptcha for ASP.NET AJAX and OCR software

    A friend of mine was evaluating OCR software and finally decided to go with FineReader. I was curious what would happen if we fed it the RadCaptcha control: would the advanced OCR manage to decode it or not? First he showed me a test run against the RadCaptcha demo description, to get an idea of the basic output. Naturally, the captured description text was no problem - only a few characters were misread, and those were corrected by the spellcheck. Next, the real test was performed. Those were only a couple of the results, but there is no need to post the rest of the tests - none of the RadCaptcha images were recognized by the OCR software. Here are the CaptchaImage settings used in the tests: Background Noise Level: Low (default), Line Noise Level: Low (default), Font Warp Factor: Low (Medium is the default).

    Read the article

  • NTFS and NTFS-3G

    - by MestreLion
    I have a netbook with Ubuntu Netbook 10.04 and a desktop with Mint 10 (roughly Ubuntu Desktop 10.10). Both of them have read/write NTFS partitions mounted via /etc/fstab, and it works fine. I've read on the net - Google, forums, and several posts here - that NTFS-3G is the driver that gives you full access to an NTFS partition, that it is new, great, powerful, yada-yada. But... my entries use plain ntfs, no mention of -3g, and they still work perfectly for both read and write. Am I already using NTFS-3G? Does 10.04 onwards use it "under the hood"? How can I check that on my system? Should my /etc/fstab entries use "ntfs-3g" as the filesystem type? Why do some posts refer to mounting ntfs, while others say mount ntfs-3g? I'm really confused about where I should use filesystem-type names (ntfs) versus driver names (ntfs-3g). Or is it irrelevant now, and ntfs is always an "alias" of sorts for ntfs-3g nowadays? I've read some posts here, from Oct-10 and Nov-10, that "announce" that ntfs-3g "finally arrived" - that's way post-Lucid 10.04. Could someone please undo this mess in my head and explain the relation between ntfs and ntfs-3g, what the current status is (10.04 and 10.10), and where I should use each (regarding mount, fstab, etc.)? Sorry for the long, confusing, redundant text - I'm really getting sleepy.

    Read the article

  • Windows Azure Evolution – TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure, and the second part on Windows Azure Web Sites. It is not focused on WAWS alone, though, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website through various approaches. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it is deployed automatically once a new commit or build is available.

    Create New Empty Web Site

    In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This creates an empty web site with only one shared instance and no database associated. Let's specify the URL, region and subscription and click OK; after a few seconds our website will be ready. Now we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available in my website even though I didn't upload or deploy anything. This means the website is charged for even before anything is deployed, similar to a cloud service (hosted service), because once we create a website the Windows Azure platform arranges a hosting process (w3wp.exe) on a group of virtual machines.

    Create Project in TFS Preview Service and Set Up the Link

    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and on the TFS side it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we just created and then click "Set up TFS publishing", a dialog helps us connect to the service. If you don't have an account, you can click the link shown in the dialog to request one. Assuming we already have a TFS service account, we first need to create a new project: go to your TFS service website and create a new project, giving it a name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window, provide your TFS service URL and click the "Authorize now" link, then click the "Accept" button to allow Windows Azure to connect to the TFS service. Control then returns to the developer portal, which lists all the projects in the account; just select the one we created and click OK. The website is now linked to the specified TFS project, and the portal confirms that the web site has been linked to TFS successfully.

    Work with TFS Preview Service in VS2010

    The portal also shows links that guide you through connecting to the TFS server from Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC, you don't need any extension; but if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to the TFS service, open Visual Studio and, in the Team Explorer, add a new TFS server, pasting in the URL of the TFS service from the developer portal. Then select the project we just created and it will be listed in Team Explorer. Now let's start to build our website.
    Since the website we are going to build will be deployed to WAWS, it is NOT a cloud service and NOT a web role, so in this case we create a normal ASP.NET web application - for example, an ASP.NET MVC 3 web application. Next, right-click the solution, select "Add Solution to Source Control", select the project we just created, and check the code in. Once the check-in finishes, we can see a build running on the TFS server; and back in the developer portal, the web site's deployment page shows a deployment running. In fact, once we linked our web site to TFS, a new build definition was created in our TFS project. It is triggered by each check-in and deploys to the linked web site automatically, so whenever our code compiles it is published to our web site from the TFS server. Once the build and deployment finish, the deployment shows as active in the developer portal, and we can browse the web site that was created in Visual Studio and deployed by TFS.

    Continuous Deployment through VS and TFS

    A big benefit of TFS publishing is continuous deployment. If I change some code in Visual Studio - for example, update some text on the home page - and check in my changes, a new build is triggered and deployed to my WAWS automatically. Even better, if we want to roll back to a previous version, we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site Use Storage and Work with a Worker Role?

    Stacy asked a question on my previous post: can a web site use Windows Azure Storage and, furthermore, work with a worker role? Since a web site is deployed on Windows Azure virtual machines in the data center, it can use all Windows Azure features, such as storage, SQL databases, CDN, etc. But since a web site is normally a standard ASP.NET web application, a PHP website, or a Node.js application, the Windows Azure SDK is not referenced by default; we have to add it ourselves. In our sample project, right-click the MVC project and click "Manage NuGet Packages". In the dialog, search for the Windows Azure packages and install "Windows Azure Storage". That gives us the assemblies to access Windows Azure storage: tables, queues and blobs. Since I already have a storage account, let's do a quick demo - just listing all blobs in a container. The code looks like this:
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        namespace WAASTFSDemo.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    ViewBag.Message = "Welcome to Windows Azure!";

                    var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                    var account = new CloudStorageAccount(credentials, false);
                    var client = account.CreateCloudBlobClient();
                    var container = client.GetContainerReference("shared");
                    ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                    return View();
                }

                public ActionResult About()
                {
                    return View();
                }
            }
        }

        @{
            ViewBag.Title = "Home Page";
        }

        <h2>@ViewBag.Message</h2>
        <p>
            To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
        <div>
            <ul>
            @foreach (var blob in ViewBag.Blobs)
            {
                <li>@blob</li>
            }
            </ul>
        </div>

    Then just check in the code and it will be deployed to the web site; finally we can see the blobs from the storage account rendered on the page. This is just an example, but it proves that web sites can use storage - tables, blobs and queues - as well. So the answer to Stacy is "yes": a web site can use queue storage to work with a worker role.

    Summary

    In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. Our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. And it is not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, in a very similar way. Currently only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link on-premises TFS servers and third-party TFS hosts such as CodePlex to Windows Azure.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Book Review: Getting Started With Windows 8 Apps By Ben Dewey

    - by Tim Murphy
    When O'Reilly gave me an opportunity to review this book I was excited: it gave me a reason to finally put some time into this new platform and into what developers will need to learn in order to be successful. This book by Ben Dewey is only 92 pages long, so if you are looking for an in-depth treatment of Windows 8 development you will need supplemental materials. It is also due for an update to reflect the changes Microsoft made prior to the final release of the OS and tools; this causes a few issues if you try to run the code samples, because of namespace changes. I was encouraged by the fact that the author didn't do the typical "hello world" app. He uses a lot of pattern-based development techniques and hits many of the main topics, including: application lifecycle, Charms integration, tiles, and sensors. The lifecycle is critical for anyone who hasn't done mobile development before: limited resources on these devices mean that the OS can suspend or kill your app altogether if it decides it needs to. He covers tombstoning, which is the key to Windows 8 and Windows Phone lifecycle management. He also dedicates a chapter to marketing and distributing the application you build. From my experience with Windows Phone development this is crucial information: you need to know how to test your application so that it will pass certification, and how to present your app so that it gets noticed among thousands of other apps. The main things I wish had been in the book are explanations of more of the common controls and a more complete explanation of the patterns that were implemented. In the end this book is a good foundation for getting exposure to the concepts that underlie this new version of the Windows platform and how it affects developers. It isn't a book that I would suggest for someone just getting into development with no understanding of pattern-based development.
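    For readers who haven't seen the lifecycle hooks the book discusses, the suspension wiring in a C# WinRT app looks roughly like this. This is a hedged sketch based on the standard Visual Studio project template, not code from the book:

        using Windows.ApplicationModel;
        using Windows.UI.Xaml;

        sealed partial class App : Application
        {
            public App()
            {
                InitializeComponent();
                Suspending += OnSuspending;   // raised when the OS suspends the app
            }

            void OnSuspending(object sender, SuspendingEventArgs e)
            {
                // Ask for extra time, persist transient state ("tombstoning"),
                // then signal that we are done.
                var deferral = e.SuspendingOperation.GetDeferral();
                // ... save navigation state and unsaved user data here ...
                deferral.Complete();
            }
        }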

    Read the article

  • Accessing and Updating Data in ASP.NET: Filtering Data Using a CheckBoxList

    Filtering Database Data with Parameters, an earlier installment in this article series, showed how to filter the data returned by ASP.NET's data source controls. In a nutshell, the data source controls can include parameterized queries whose parameter values are defined via parameter controls. For example, the SqlDataSource can include a parameterized SelectCommand, such as: SELECT * FROM Books WHERE Price > @Price. Here, @Price is a parameter; the value for a parameter can be defined declaratively using a parameter control. ASP.NET offers a variety of parameter controls, including ones that use hard-coded values, ones that retrieve values from the querystring, and ones that retrieve values from session state, among others. Perhaps the most useful parameter control is the ControlParameter, which retrieves its value from a Web control on the page. Using the ControlParameter we can filter the data returned by the data source control based on the end user's input. While the ControlParameter works well with most types of Web controls, it does not work as expected with the CheckBoxList control. The ControlParameter is designed to retrieve a single property value from the specified Web control, but the CheckBoxList control does not have a property that returns all of the values of its selected items in a form the ControlParameter can use. Moreover, if you are using the selected CheckBoxList items to query a database, you'll quickly find that SQL does not offer out-of-the-box functionality for filtering results based on a user-supplied list of filter criteria. The good news is that with a little bit of effort it is possible to filter data based on the end user's selections in a CheckBoxList control. This article starts with a look at how to get SQL to filter data based on a user-supplied, comma-delimited list of values. Next, it shows how to programmatically construct a comma-delimited list that represents the selected CheckBoxList values and pass that list into the SQL query. Finally, we'll explore creating a custom parameter control to handle this logic declaratively. Read on to learn more!
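    To make the programmatic step concrete before reading on, here is a hedged C# sketch of building the comma-delimited list from the CheckBoxList and handing it to the data source as a single parameter. The control IDs and parameter name are invented for illustration, not taken from the article:

        using System;
        using System.Linq;
        using System.Web.UI.WebControls;

        public partial class BooksPage : System.Web.UI.Page
        {
            protected void GenresCheckBoxList_SelectedIndexChanged(object sender, EventArgs e)
            {
                // Collect the values of the checked items into "1,4,7" form.
                var selected = GenresCheckBoxList.Items.Cast<ListItem>()
                                                 .Where(item => item.Selected)
                                                 .Select(item => item.Value);

                // Feed the list to the SqlDataSource as one parameter; the SQL
                // side must split the list, as the article explains.
                BooksDataSource.SelectParameters["GenreList"].DefaultValue =
                    string.Join(",", selected);
                BooksGridView.DataBind();
            }
        }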

    Read the article

  • Help with a simple incremental backup script

    - by Evan
    I'd like to run the following (incomplete) script weekly as a cron job to back up my home directory to an external drive mounted as /mnt/backups:

        #!/bin/bash
        #
        TIMEDATE=$(date +%b-%d-%Y-%k:%M)
        LASTBACKUP=pathToDirWithLastBackup

        rsync -avr --numeric-ids --link-dest=$LASTBACKUP /home/myfiles /mnt/backups/myfiles$TIMEDATE

    My first question is: how do I correctly set LASTBACKUP to the most recently created directory in /mnt/backups? Secondly, I'm under the impression that using --link-dest means that files in previous backups will not be copied in later backups if they still exist, but will rather link back to the originally copied files. However, I don't want to retain old files forever. What would be the best way to remove all the backups before a certain date without losing files that may be linked in those backups by current backups? Basically I'm looking to merge all the backups before a certain date, if that makes more sense than the way I initially framed the question :). Can --link-dest create hard links - and if so, would just deleting the previous directories leave the linked files intact? Finally, I'd like to add a line to my script that compresses each newly created backup folder (/mnt/backups/myfiles$TIMEDATE). Based on reading this question, I was wondering if I could just run gzip --rsyncable /backups/myfiles$TIMEDATE after rsync, so that sequential rsync --link-dest executions would find the already copied and compressed files? I know that's a lot, so many thanks in advance for your help!

    Read the article

  • How to Change the Default Application for Android Tasks

    - by Jason Fitzpatrick
    When it comes time to switch from using one application to another on your Android device, it isn't immediately clear how to do so. Follow along as we walk you through swapping the default application for any Android task. Initially, changing the default application in Android is a snap. After you install the new application (new web browser, new messaging tool, new whatever), Android prompts you to pick which application (the new or the old) you wish to use for that task the first time you attempt to open a web page, check your text messages, or otherwise trigger the event. Easy! What about when it comes time to uninstall the app or just change back to your old app? There's no helpful pop-up dialog box for that. Read on as we show you how to swap out any default application for any other with a minimum of fuss.

    Read the article

  • Where did my form go in SharePoint 2010!?

    - by MOSSLover
    So I was working on an intro-to-development demo for the Central NJ .NET User Group and I found a few kinks. I opened up a form to customize in InfoPath, and after a Quick Publish it wouldn't work. I IM'ed my InfoPath guru friend, Lori Gowin, and she said to try a regular publish. The form was still not showing up in SharePoint: I could open it, and it knew my changes, but it would just not render in a browser. So I decided to create a form from scratch without using the Customize Form button in the list. That did not work either, so it was Google time. Finally I found this blog post: http://qwertconsulting.wordpress.com/2009/12/22/list-form-from-infopath-2010-is-blank/. Once I went into the configuration wizard and turned on the State Service, everything worked perfectly fine. It was great. See, I normally don't run through the wizard and check the box to turn on all the services in SharePoint; I usually like a leaner environment, plus I want to learn how everything works. So I guess most people had no idea what was going on in the background. To get InfoPath to work you need the State Service - it's doing some type of caching for the browser. Very neat stuff. I hope this helps one of you out there some day.

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked, and I find it interesting that in our daily life all of us often need the same kind of information at the same time. Here are examples of such questions:

    How many user-created tables are there in the database?
    How many non-clustered indexes does each table in the database have?
    Is a table a heap, or does it have a clustered index on it?
    How many rows does each table in the database contain?

    I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the questions above. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video:

        SELECT [schema_name] = s.name,
               table_name = o.name,
               MAX(i1.type_desc) ClusteredIndexorHeap,
               COUNT(i.TYPE) NoOfNonClusteredIndex,
               p.rows
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
        LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
        WHERE o.TYPE IN ('U') AND i.TYPE = 2
        GROUP BY s.name, o.name, p.rows
        ORDER BY schema_name, table_name

    Related Tips in SQL in Sixty Seconds:

    Find Row Count in Table – Find Largest Table in Database
    Find Row Count in Table – Find Largest Table in Database – T-SQL
    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
    Index Levels and Delete Operations – Page Level Observation

    What would you like to see in the next SQL in Sixty Seconds video?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Achieving forward compatibility with C++11

    - by mcmcc
    I work on a large software application that must run on several platforms. Some of these platforms support some features of C++11 (e.g. MSVS 2010) and some don't support any (e.g. GCC 4.3.x). I see this situation continuing for several years (my best guess: 3-5 years). Given that, I would like to set up a compatibility interface such that (to whatever degree possible) people can write C++11 code that will still compile with older compilers with a minimum of maintenance. Overall, the goal is to minimize #ifdef's as much as reasonably possible while still enabling basic C++11 syntax/features on the platforms that support them, and providing emulation on the platforms that don't. Let's start with std::move(). The most obvious way to achieve compatibility would be to put something like this in a common header file:

        #if !defined(HAS_STD_MOVE)
        namespace std { // C++11 emulation
            template <typename T> inline T& move(T& v) { return v; }
            template <typename T> inline const T& move(const T& v) { return v; }
        }
        #endif // !defined(HAS_STD_MOVE)

    This allows people to write things like

        std::vector<Thing> x = std::move(y);

    with impunity. It does what they want in C++11, and it does the best it can in C++03. When we finally drop the last of the C++03 compilers, this code can remain as is. However, according to the standard it is illegal to inject new symbols into the std namespace. That's the theory. My question is: practically speaking, is there any harm in doing this as a way of achieving forward compatibility?

    Read the article

  • Add physical disk to KVM virtual machine

    - by evan
    I'm setting up a file server (NAS4Free) as a KVM virtual machine on an Ubuntu Server 12.04 system. How do I add physical hard drives directly to the VM so they can be used by the guest (NAS4Free) but not the host? Specifically, the hard drive I'd like to attach is /dev/sda (which is not currently mounted on the server). So far I've found two solutions, but I haven't gotten either to work. The first, from Server Fault, suggests using virt-manager. I haven't gotten this to work because when I try to select an existing drive, nothing is listed. My best guess as to why is that I'm using virt-manager over SSH and not connecting as root - should that make a difference? The second solution I've found is to just run the command (modified for my system) qm set nas4free -virtio /dev/sda, but that seems to require Proxmox, which I don't have installed and which doesn't seem to be in the default repositories. Finally, once the above is sorted out and I can attach the drive directly to the VM, does anyone have any experience with whether the drive should be attached to the VM as scsi, ide, or virtio? (I know virtio was recommended in the linked Server Fault page, but I hadn't heard of it before now since I mainly use VMware.) Thanks for your help!

    Read the article

  • The internal storage of a DATETIME2 value

    - by Peter Larsson
    Today I went investigating the internal storage of the DATETIME2 datatype. What I found is that for a datetime2 value with precision 0 (seconds only), SQL Server needs 6 bytes to represent the value, but stores 7 bytes. This is because SQL Server adds one byte to hold the precision of the datetime2 value. Start with this very simple repro:

        declare @now datetime2(7) = '2010-12-15 21:04:03.6934231'

        select  cast(cast(@now as datetime2(0)) as binary(7)),
                cast(cast(@now as datetime2(1)) as binary(7)),
                cast(cast(@now as datetime2(2)) as binary(7)),
                cast(cast(@now as datetime2(3)) as binary(8)),
                cast(cast(@now as datetime2(4)) as binary(8)),
                cast(cast(@now as datetime2(5)) as binary(9)),
                cast(cast(@now as datetime2(6)) as binary(9)),
                cast(cast(@now as datetime2(7)) as binary(9))

    Now we copy and paste these binary values and investigate which bytes represent what. In the table below, the prefix byte holds the precision, the ticks are the time part, and the days are the date part (the original post color-coded these three pieces):

        Prefix  Ticks (hex)  Ticks (dec)   Days (hex)  Days (dec)  Original value
        ------  -----------  ------------  ----------  ----------  --------------------
        0x00    442801              75844  A8330B          734120  0x00442801A8330B
        0x01    A5920B             758437  A8330B          734120  0x01A5920BA8330B
        0x02    71BA73            7584369  A8330B          734120  0x0271BA73A8330B
        0x03    6D488504         75843693  A8330B          734120  0x036D488504A8330B
        0x04    46D4342D        758436934  A8330B          734120  0x0446D4342DA8330B
        0x05    BE4A10C401     7584369342  A8330B          734120  0x05BE4A10C401A8330B
        0x06    6FEBA2A811    75843693423  A8330B          734120  0x066FEBA2A811A8330B
        0x07    57325D96B0   758436934231  A8330B          734120  0x0757325D96B0A8330B

    What you can see is that the date part is equal in all cases, which makes sense since the precision doesn't affect the date part. What would have been fun is datetime2(negative), just like ROUND accepts a negative length: -1 would mean rounding to 10 seconds, -2 rounding to the minute, -3 rounding to 10 minutes, -4 rounding to the hour and finally -5 rounding to 10 hours. -5 is pretty useless, but if you extend this thinking to -6, -7 and so on, you could actually get a datetime2 value which is accurate to the month only. Well, enough ranting about this; let's get back to the table above. If you add 75844 seconds to midnight, you get 21:04:04, which is exactly what you got in the select statement above. And it makes perfect sense that each following value is 10 times greater each time the precision is increased one step, too. //Peter
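    A quick way to sanity-check Peter's arithmetic outside SQL Server - a small C# one-off, not part of the original repro:

        using System;

        class Datetime2Check
        {
            static void Main()
            {
                // At precision 0, the ticks are whole seconds past midnight.
                Console.WriteLine(TimeSpan.FromSeconds(75844));            // 21:04:04
                // The day part counts days since 0001-01-01.
                Console.WriteLine(new DateTime(1, 1, 1).AddDays(734120));  // 2010-12-15
            }
        }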

    Read the article

  • Solaris X86 64-bit Assembly Programming

    - by danx
    Solaris X86 64-bit Assembly Programming

    This is a simple example of writing, compiling, and debugging Solaris 64-bit x86 assembly language with a C program. This is also referred to as "AMD64" assembly. The term "AMD64" is used in an inclusive sense to refer to all x86 64-bit processors, whether from the AMD Opteron family or the Intel 64 processor family; both run Solaris x86. I'm keeping this example simple mainly to illustrate how everything comes together - compiler, assembler, linker, and debugger - when using assembly language. The example I'm using here is a C program that calls an assembly language function, passing a C string. The assembly language function takes the C string and calls printf() with it to print the string.

    AMD64 Register Usage

    But first let's review the use of AMD64 registers. AMD64 has several 64-bit registers, some special purpose (such as the stack pointer) and others general purpose. By convention, Solaris follows the AMD64 ABI in register usage, which is the same convention used by Linux, but different from Microsoft Windows (for example, in which registers are used to pass parameters). This blog will only discuss the conventions for Linux and Solaris. The following chart shows how AMD64 registers are used. The first six parameters to a function are passed through registers; if there are more than six parameters, parameter 7 and above are pushed on the stack before calling the function. The stack is also used to save temporary "stack" variables for use by a function.

        %rip                          Instruction pointer; points to the current instruction
        %rsp                          Stack pointer
        %rbp                          Frame pointer (saved stack pointer pointing to parameters on stack)
        %rdi                          Function parameter 1
        %rsi                          Function parameter 2
        %rdx                          Function parameter 3
        %rcx                          Function parameter 4
        %r8                           Function parameter 5
        %r9                           Function parameter 6
        %rax                          Function return value
        %r10, %r11                    Temporary registers (need not be saved before use)
        %rbx, %r12, %r13, %r14, %r15  Temporary registers, but must be saved before use and restored
                                      before returning from the current function (usually with the
                                      push and pop instructions)

    32-, 16-, and 8-bit registers

    To access the lower 32, 16, or 8 bits of a 64-bit register, use the following:

        64-bit   32-bit   16-bit   8-bit
        %rax     %eax     %ax      %al
        %rbx     %ebx     %bx      %bl
        %rcx     %ecx     %cx      %cl
        %rdx     %edx     %dx      %dl
        %rsi     %esi     %si      %sil
        %rdi     %edi     %di      %dil
        %rbp     %ebp     %bp      %bpl
        %rsp     %esp     %sp      %spl
        %r8      %r8d     %r8w     %r8b
        %r9      %r9d     %r9w     %r9b
        %r10     %r10d    %r10w    %r10b
        %r11     %r11d    %r11w    %r11b
        %r12     %r12d    %r12w    %r12b
        %r13     %r13d    %r13w    %r13b
        %r14     %r14d    %r14w    %r14b
        %r15     %r15d    %r15w    %r15b

    There are other registers present, such as the 64-bit %mm registers, 128-bit %xmm registers, 256-bit %ymm registers, and 512-bit %zmm registers. Except for the %mm registers, these may not be present on older AMD64 processors.

    Assembly Source

    The following is the source for a C program, helloas1.c, that calls an assembly function, hello_asm().

        $ cat helloas1.c
        extern void hello_asm(char *s);

        int
        main(void)
        {
            hello_asm("Hello, World!");
        }

    The assembly function called above, hello_asm(), is defined below.
        $ cat helloas2.s
        /*
         * helloas2.s
         * To build:
         *      cc -m64 -o helloas2-cpp.s -D_ASM -E helloas2.s
         *      cc -m64 -c -o helloas2.o helloas2-cpp.s
         */

        #if defined(lint) || defined(__lint)

        /* ARGSUSED */
        void
        hello_asm(char *s)
        {
        }

        #else /* lint */

        #include <sys/asm_linkage.h>

            .extern printf

            ENTRY_NP(hello_asm)
            // Setup printf parameters
            mov  %rdi, %rsi             // P2 (%rsi) is the string variable
            lea  .printf_string, %rdi   // P1 (%rdi) is the printf format string
            call printf
            ret
            SET_SIZE(hello_asm)

            // Read-only data
            .text
            .align 16
            .type .printf_string, @object
        .printf_string:
            .ascii "The string is: %s.\n\0"

        #endif /* lint || __lint */

    In the assembly source above, the C skeleton code under "#if defined(lint)" is optionally used by lint to check the interfaces with your C program - very useful to catch nasty interface bugs. The "asm_linkage.h" file includes some handy macros useful for assembly, such as ENTRY_NP(), used to define a program entry point, and SET_SIZE(), used to set the function size in the symbol table.

    The function hello_asm calls the C function printf(), passing two parameters: parameter 1 (P1) is the printf format string, and P2 is a string variable. The function begins by moving %rdi, which contains parameter 1 as passed to hello_asm, into printf()'s P2, %rsi. Then it sets printf's P1, the format string, by loading the address of the format string into %rdi. Finally it calls printf. After returning from printf, the hello_asm function itself returns.

    Larger, more complex assembly functions usually do more setup than the example above. If a function returns a value, it sets %rax to the return value. It is also typical for a function to save the %rbp and %rsp registers of the calling function and to restore these registers before returning; %rsp contains the stack pointer and %rbp contains the frame pointer. Here is the typical setup and return sequence for a function:

        ENTRY_NP(sample_assembly_function)
        push %rbp           // save frame pointer on stack
        mov  %rsp, %rbp     // save stack pointer in frame pointer
        xor  %rax, %rax     // set function return value to 0
        mov  %rbp, %rsp     // restore stack pointer
        pop  %rbp           // restore frame pointer
        ret                 // return to calling function
        SET_SIZE(sample_assembly_function)

    Compiling and Running Assembly

    Use the Solaris cc command to compile both C and assembly source, and to pre-process assembly source. You can also use GNU gcc instead of cc to compile, if you prefer. The "-m64" option tells the compiler to compile in 64-bit address mode (instead of 32-bit).

        $ cc -m64 -o helloas2-cpp.s -D_ASM -E helloas2.s
        $ cc -m64 -c -o helloas2.o helloas2-cpp.s
        $ cc -m64 -c helloas1.c
        $ cc -m64 -o hello-asm helloas1.o helloas2.o
        $ file hello-asm helloas1.o helloas2.o
        hello-asm:      ELF 64-bit LSB executable AMD64 Version 1 [SSE FXSR FPU], dynamically linked, not stripped
        helloas1.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        helloas2.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        $ hello-asm
        The string is: Hello, World!.

    Debugging Assembly with MDB

    MDB is the Solaris system debugger. It can also be used to debug user programs, including assembly and C. The following example runs the program above, hello-asm, under control of the debugger. In the example below I load the program, set a breakpoint at the assembly function hello_asm, display the registers and the first parameter, step through the assembly function, and continue execution.
        $ mdb hello-asm                     # Start the debugger
        > hello_asm:b                       # Set a breakpoint
        > ::run                             # Run the program under the debugger
        mdb: stop at hello_asm
        mdb: target stopped at:
        hello_asm:      movq   %rdi,%rsi
        > $C                                # display function stack
        ffff80ffbffff6e0 hello_asm()
        ffff80ffbffff6f0 0x400adc()
        > $r                                # display registers
        %rax = 0x0000000000000000       %r8  = 0x0000000000000000
        %rbx = 0xffff80ffbf7f8e70       %r9  = 0x0000000000000000
        %rcx = 0x0000000000000000       %r10 = 0x0000000000000000
        %rdx = 0xffff80ffbffff718       %r11 = 0xffff80ffbf537db8
        %rsi = 0xffff80ffbffff708       %r12 = 0x0000000000000000
        %rdi = 0x0000000000400cf8       %r13 = 0x0000000000000000
        %r14 = 0x0000000000000000       %r15 = 0x0000000000000000
        %cs = 0x0053    %fs = 0x0000    %gs = 0x0000
        %ds = 0x0000    %es = 0x0000    %ss = 0x004b
        %rip = 0x0000000000400c70 hello_asm
        %rbp = 0xffff80ffbffff6e0
        %rsp = 0xffff80ffbffff6c8
        %rflags = 0x00000282
          id=0 vip=0 vif=0 ac=0 vm=0 rf=0 nt=0 iopl=0x0
          status=<of,df,IF,tf,SF,zf,af,pf,cf>
        %gsbase = 0x0000000000000000
        %fsbase = 0xffff80ffbf782a40
        %trapno = 0x3
        %err = 0x0
        > ::dis                             # disassemble the current instructions
        hello_asm:      movq   %rdi,%rsi
        hello_asm+3:    leaq   0x400c90,%rdi
        hello_asm+0xb:  call   -0x220   <PLT:printf>
        hello_asm+0x10: ret
        0x400c81:       nop
        0x400c85:       nop
        0x400c88:       nop
        0x400c8c:       nop
        0x400c90:       pushq  %rsp
        0x400c91:       pushq  $0x74732065
        0x400c96:       jb     +0x69    <0x400d01>
        > 0x0000000000400cf8/S              # %rdi contains Parameter 1
        0x400cf8:       Hello, World!
        > [                                 # Step and execute 1 instruction
        mdb: target stopped at:
        hello_asm+3:    leaq   0x400c90,%rdi
        > [
        mdb: target stopped at:
        hello_asm+0xb:  call   -0x220   <PLT:printf>
        > [
        The string is: Hello, World!.
        mdb: target stopped at:
        hello_asm+0x10: ret
        > [
        mdb: target stopped at:
        main+0x19:      movl   $0x0,-0x4(%rbp)
        > :c                                # continue program execution
        mdb: target has terminated
        > $q                                # quit the MDB debugger
        $

    In the example above, at the start of function hello_asm(), I display the stack contents with "$C", display the register contents with "$r", then disassemble the current function with "::dis". The first function parameter, which is a C string, is passed by reference with the string address in %rdi (see the register usage chart above). The address is 0x400cf8, so I print the value of the string with the "/S" MDB command: "0x0000000000400cf8/S". I can also print the contents at an address in several other formats. Here are a few popular formats; for more, see the mdb(1) man page for details.

        address/S   C string
        address/C   ASCII character (1 byte)
        address/E   unsigned decimal (8 bytes)
        address/U   unsigned decimal (4 bytes)
        address/D   signed decimal (4 bytes)
        address/J   hexadecimal (8 bytes)
        address/X   hexadecimal (4 bytes)
        address/B   hexadecimal (1 byte)
        address/K   pointer in hexadecimal (4 or 8 bytes)
        address/I   disassembled instruction

    Finally, I step through each machine instruction with the "[" command, which steps over functions. If I wanted to step into a function, I would use the "]" command. Then I continue program execution with ":c", which continues until the program terminates.

    MDB Basic Cheat Sheet

    Here's a brief cheat sheet of some of the more common MDB commands useful for assembly debugging. There's an entire set of macros and more powerful commands, especially some for debugging the Solaris kernel, but that's beyond the scope of this example.

        $C              Display function stack with pointers
        $c              Display function stack
        $e              Display external function names
        $v              Display non-zero variables and registers
        $r              Display registers
        ::fpregs        Display floating point (or "media") registers, including %st, %xmm, and %ymm
        ::status        Display program status
        ::run           Run the program (followed by optional command-line parameters)
        $q              Quit the debugger
        address:b       Set a breakpoint
        address:d       Delete a breakpoint
        $b              Display breakpoints
        :c              Continue program execution after a breakpoint
        [               Step 1 instruction, stepping over function calls
        ]               Step 1 instruction, stepping into function calls
        address::dis    Disassemble instructions at an address
        ::events        Display events

    Further Information

    "Assembly Language Techniques for Oracle Solaris on x86 Platforms" by Paul Lowik (2004). A good tutorial on Solaris x86 optimization with assembly.
    "The Solaris Operating System on x86 Platforms": an excellent, detailed tutorial on x86 architecture, with Solaris specifics, by ex-Sun employee Frank Hofmann (2005).
    "AMD64 ABI Features", in the Solaris 64-bit Developer's Guide, contains rules on data types and register usage for Intel 64/AMD64-class processors (available at docs.oracle.com).
    Solaris X86 Assembly Language Reference Manual (available at docs.oracle.com).
    SPARC Assembly Language Reference Manual (available at docs.oracle.com).
    System V Application Binary Interface (2003) defines the AMD64 ABI for UNIX-class operating systems, including Solaris, Linux, and BSD. Google for it - the original website is gone.
    cc(1), gcc(1), and mdb(1) man pages.

    Read the article

  • Writing a Book, and Moving my Blog

    - by Ben Nevarez
    I started blogging about SQL Server here at SQLblog back in July 2009, and it was a lot of fun; I enjoyed it a lot. Later, after a series of blog posts about the Query Optimizer, I was invited to write an entire book about that same topic. But after a few months I realized that it was going to be hard to continue both blogging and writing chapters for a book - this in addition to my regular day job - so I decided to stop blogging for a little while. Now that I have finished the last chapter of the book and am working on the final chapter reviews, I have decided to start blogging again. This time I am moving my blog to http://www.benjaminnevarez.com. As with my previous posts, I plan to write about my topics of interest, like the relational engine, and basically anything related to SQL Server. Hopefully you will find my new blog interesting and useful. Finally, I would like to thank Adam for allowing me to blog here.

    Read the article

  • Cross Platform Data Access with Xamarin & C# For iPhone, iPad, and Android - Local, Web Services, & Sql Server

    - by Wallym
    The following is a link to cross-platform data access training with Xamarin and C#, intended for use on iPhone, iPad, and Android devices. The course covers local data in SQLite, calling web services via REST and JSON, and calling SQL Server.

    Url: http://www.learnnowonline.com/course/cpx2/xamarin-cross-platform-data-access/

    Course Data

    Applications live on data. These applications can vary from an online social network service, to a company's internal database, to simple data, and all points in between. This course focuses on how to easily access data on the device, communicate back and forth with a web service, and then finally with a SQL Server database.

    Outline

    Local Data (27:36): Introduction (00:36), Problem (01:57), Solution (02:01), LINQ (02:03), LINQ Status (00:48), SQLite (02:18), SQLite - .Net Developers (00:50), SQLite-net (01:07), SQLite-net Attributes (02:10), Getting Started (01:09), CRUD (01:05), SQLite Platforms (01:17), Demo: SQLite - Android (04:53), Demo: SQLite - iOS (04:56), Summary (00:20)

    Web Services Data (32:43): Introduction (00:19), Async Commands (03:15), HttpClient (01:26), HTTP Verbs (01:29), Notes (00:58), GET Operation (01:37), JSON.NET (01:50), Images (01:16), Other Http Verbs (01:27), Post (03:18), Demo: Http - iOS prt1 (05:26), Demo: Http - iOS prt2 (05:28), Demo: Http - Android (04:20), Summary (00:27)

    Direct Data (12:33): Introduction (00:23), Remote Data - Direct (02:47), Sql Server (01:15), Demo: Sql Server - iOS (04:15), Demo: Sql Server - Android (01:49), "codepage 1252 not supported" (01:03), Other Resources (00:43), Summary (00:15)

    Note: Thanks to Frank Krueger for his data access library SQLite-net. It is very helpful and I have used it in some other projects beyond just this training session.
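    As a taste of the local-data module, here is a minimal SQLite-net sketch in C#; the entity class and database path are made up for illustration:

        using System;
        using SQLite;   // SQLite-net, the library credited above

        public class Message
        {
            [PrimaryKey, AutoIncrement]
            public int Id { get; set; }
            public string Text { get; set; }
        }

        public static class LocalStore
        {
            public static void Demo(string dbPath)  // e.g. a file under the app's data folder
            {
                using (var db = new SQLiteConnection(dbPath))
                {
                    db.CreateTable<Message>();                   // no-op if the table exists
                    db.Insert(new Message { Text = "Hello" });
                    foreach (var m in db.Table<Message>().Where(x => x.Id > 0))
                        Console.WriteLine(m.Text);
                }
            }
        }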

    Read the article

  • make-like build tools for data?

    - by miku
    Make is a standard tool for building software, but make decides whether a target needs to be regenerated by comparing file modification times. Are there any proven, preferably small tools that handle builds not for software but for data - something that regenerates targets based not only on mod times but on certain other properties (e.g. completeness)? (Or alternatively, some paper that describes such a tool.) As an illustration, I'd like to automate the following process:

    get data (e.g. a tarball) from some regularly updated source
    copy it somewhere if it's not there (based e.g. on some filename scheme)
    convert the files to a different format (but only if there aren't successfully converted ones there, e.g. from a previous attempt - custom comparison routine)
    for each file, find a certain data element and fetch some additional file from, say, a URL, but only if that hasn't been downloaded yet (decide on existence of the file and file "freshness")
    finally compute something (e.g. a word count for something identifiable, and store it in the database, but only if the DB does not have an entry for that exact ID yet)

    Observations:

    there are different stages
    each stage is usually simple to compute or implement in isolation
    each stage may be simple, but the data volume may be large
    each stage may produce a few errors
    each stage may have different signals for when (re)processing is needed

    Requirements:

    builds should be interruptible and idempotent (== robust)
    when interrupted, already-processed objects should be reused to speed up the next run
    data paths should be easy to adjust (simple syntax, nothing new to learn, an internal DSL would be OK)
    some form of dependency graph that describes the process would be nice for later visualizations
    it should leverage existing programs, if possible

    I've done some research on make alternatives like rake and have worked a lot with ant and maven in the past. All these tools naturally focus on code and software builds, not on data builds. A system we have in place now for a task similar to the above is pretty much just shell scripts, which are compact (and an OK glue for a variety of other programs written in other languages), so I wonder if worse is better?
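    The core decision such a tool has to make - rebuild on more than just mtime - is small enough to sketch. Here it is in C#, with the extra property check left as a caller-supplied predicate; all names are illustrative, not from any existing tool:

        using System;
        using System.IO;

        static class DataBuild
        {
            // Rebuild if the target is missing, is older than its source, or
            // fails a caller-supplied validity check (completeness, checksum,
            // a DB row already existing, ...).
            public static bool NeedsRebuild(string source, string target,
                                            Func<string, bool> isValid)
            {
                if (!File.Exists(target)) return true;
                if (File.GetLastWriteTimeUtc(target) < File.GetLastWriteTimeUtc(source))
                    return true;
                return !isValid(target);
            }
        }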

    Read the article

  • Android Design - Service vs Thread for Networking

    - by Nevyn
    I am writing an Android app, finally (yay me), and for this app I need persistent, but user-closeable, network sockets (yes, more than one). I decided to try my hand at writing my own version of an IRC client. My design issue, however, is that I'm not sure how to run the socket connectivity itself. If I put the sockets at the Activity level, they keep getting closed shortly after the Activity becomes non-visible (also a problem that needs solving... but I think I figured that one out). But if I run a "connectivity service", I need to find out whether I can have multiple instances of it running (of the service, that is - one per server/socket). Either that, or I need a way to thread the sockets themselves and have multiple threads running that I can still communicate with directly (an ID system of some sort). Thus the question: is it 'better', or at least more "proper", design to put the socket and networking in a Service and have the Activities consume said service, or should I tie the sockets directly to some threaded process owned by the UI Activity and not bother with the service implementation at all? I do know better than to put the networking directly on the UI thread, but that's as far as I've managed to get.
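    The post is about native (Java) Android, but the shape of the service-owns-the-sockets option is easy to sketch. Here it is in Xamarin.Android C#, to stay consistent with the other examples on this page; all class and member names are illustrative:

        using System.Collections.Generic;
        using System.Net.Sockets;
        using Android.App;
        using Android.Content;
        using Android.OS;

        // One bound service owns every IRC socket, keyed by server name, so
        // Activities can come and go without the connections being torn down.
        [Service]
        public class IrcService : Service
        {
            readonly Dictionary<string, TcpClient> connections =
                new Dictionary<string, TcpClient>();

            public class IrcBinder : Binder
            {
                public IrcService Service { get; private set; }
                public IrcBinder(IrcService service) { Service = service; }
            }

            public override IBinder OnBind(Intent intent)
            {
                return new IrcBinder(this);
            }

            // Callers must invoke this off the UI thread; connecting on the
            // main thread raises NetworkOnMainThreadException on Android.
            public TcpClient GetConnection(string server)
            {
                TcpClient client;
                if (!connections.TryGetValue(server, out client))
                {
                    client = new TcpClient(server, 6667);  // default IRC port
                    connections[server] = client;
                }
                return client;
            }
        }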

    Read the article

  • links for 2010-04-29

    - by Bob Rhubart
    AS11 Oracle B2B Sync Support - Series 1 (Oracle Fusion Middleware - B2B Team Blog)
    Sinkarbabu Kirubanithi with part 1 of a planned 3-part series on synchronous message support in Oracle B2B 11g. (tags: oracle otn fusionmiddleware b2b)

    Java 2 Go!: How to write a simple yet "bullet-proof" object cache
    "So, while we were thinking hard to come up with the most efficient, generic and elegant way of finally implementing our weak and soft caches, Mr. Eric Chan, who is one of the main architects in Oracle Beehive team, had a very interesting breakthrough. In short terms, he thought of a very nice way of combining both WeakReference and SoftReference in our weak and soft caches so that they would provide exactly the same functionality without having to deal with those reference queues at all. Basically, instead of using a plain HashMap as our backing storage, we used a java.util.WeakHashMap in both our cache implementations. The hat trick was what and how to store things in it." - Eduardo Rodrigues (tags: oracle java sun)

    @jamet123: First Look - Oracle Data Mining
    "[Oracle Data Mining] is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." - James Taylor (tags: oracle otn datamining database)

    Live Webcast: Social BPM: Integrating Enterprise 2.0 with Business Applications
    Peggy Chen and Dan Tortorici show you how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Wednesday, May 12, 2010 at 11:00am PT / 2:00pm ET (tags: oracle otn enterprise2.0 webcast)

    Read the article

  • TFS Shipping Cadence

    - by Tarun Arora
    Brian Harry has formally announced a change to the TFS shipping cadence, from the traditional two-to-three-year production cycle to a more agile and refreshing cycle of at least once every three weeks! The change didn't happen overnight; it was a gradual process, greatly influenced by moving TFS to the cloud. The thinking started with trying to figure out what the team wanted. Like people often do, the team started with what they knew and tried to evolve from there. The team spent a few months thinking through "What if we do major releases every year and minor releases every 6 months?", "Major releases every 6 months, patches once a month?", "What if we do quarterly releases - can we get the release cycle going that fast?", and so on. The team also spent time debating what constitutes a major release vs. a minor release, and how much churn customers are willing to tolerate. The team finally concluded: "When a change this big is necessary - forget where you are and just ask where you want to be, and then ask what it would take to get there." Going forward you will see:

    Team Foundation Service updates every 3 weeks
    Visual Studio client updates quarterly (limited to VS 2012 for now)
    Team Foundation Server updates more frequently than every 2 years, with details still being worked out; the team will definitely deliver one this fall

    Refer to the complete blog post from Brian Harry here.

    Read the article

  • Dutch ACEs SOA Partner Community Award Celebration

    - by JuergenKress
    When you win, you need to celebrate. This was my line of thinking when I found out that I was part of a group that won the Oracle SOA Community Country Award. Well - thinking about a party is one thing; preparing it and finally having the small party is something completely different. It starts with finding a date that suits the majority of invited people. As you can imagine, the SOA ACEs and ACE Directors have a busy life that takes them places, and alongside that they are engaged with customers who want to squeeze every bit of knowledge out of them. So everybody is pretty busy (that's what makes you an ACE). After some deliberation (and checks of international Oracle events, TripIt, blogs and tweets) a date was chosen. Meeting on a Friday evening for some drinks is probably not a Dutch-only activity, but as some of the ACEs are self-employed, they miss having companies around them to organize such events. Come the day, a turn-out of almost 50% was great - although I had expected a few more folks. This was mainly due to some illness and work overload. Luckily the mini-party got going, (alcoholic) beverages were consumed, food was appreciated, a decent picture was made (see below), and all had a good chat and hopefully a good time. (Above, from left to right: Eric Elzinga, Andreas Chatziantoniou, Mike van Aalst, Edwin Biemond.) All in all a nice evening, and certainly a "meeting" that can be repeated. For the full article, please visit Andreas's blog. Want to organize a local SOA & BPM community? Let us know - we are more than happy to support you! To receive more information, become a member of the SOA & BPM Partner Community by registering at http://www.oracle.com/goto/emea/soa (OPN account required).

    Read the article

  • Mapping between 4+1 architectural view model & UML

    - by Sadeq Dousti
    I'm a bit confused about how the 4+1 architectural view model maps to UML. Wikipedia gives the following mapping:

    Logical view: class diagram, communication diagram, sequence diagram
    Development view: component diagram, package diagram
    Process view: activity diagram
    Physical view: deployment diagram
    Scenarios: use-case diagram

    The paper "Role of UML Sequence Diagram Constructs in Object Lifecycle Concept" gives the following mapping:

    Logical view: class diagram (CD), object diagram (OD), sequence diagram (SD), collaboration diagram (COD), state chart diagram (SCD), activity diagram (AD)
    Development view: package diagram, component diagram
    Process view: use case diagram, CD, OD, SD, COD, SCD, AD
    Physical view: deployment diagram
    Use case view: use case diagram, OD, SD, COD, SCD, AD (which combines the four mentioned above)

    The web page "UML 4+1 View Materials" presents its mapping as a figure. Finally, the white paper "Applying 4+1 View Architecture with UML 2" gives yet another mapping:

    Logical view: class diagrams, object diagrams, state charts, and composite structures
    Process view: sequence diagrams, communication diagrams, activity diagrams, timing diagrams, interaction overview diagrams
    Development view: component diagrams
    Physical view: deployment diagram
    Use case view: use case diagram, activity diagrams

    I'm sure further searching will reveal other mappings as well. While various people usually have different perspectives, I don't see why that should be the case here. Specifically, each UML diagram describes the system from a particular aspect, so, for instance, why is the "sequence diagram" considered to describe the "logical view" of the system by one author, while another author considers it to describe the "process view"? Could you please help me clarify the confusion?

    Read the article

  • Nautilus can't start due to segmentation fault

    - by Dmitriy Sukharev
    Out of the blue, I can't start Nautilus today. When I try to open any directory it tries to open it, and sometimes I can even see the contents of the directory, but it finally closes; after that there are no icons on the desktop. When I tried to launch Nautilus from a terminal, I got:

        $ nautilus .
        Initializing nautilus-dropbox 0.7.1
        Initializing nautilus-gdu extension
        Segmentation fault (core dumped)

    I've tried moving the ~/.local/share/gvfs-metadata folder; I don't have the nautilus-open-terminal package and don't have the file /usr/local/lib/libgtk-3.so.0. Also, I can't update the system right now - I keep getting the same hash-sum error:

        $ sudo apt-get update
        [sudo] password for dmitriy:
        Ign http://mirror.mirohost.net precise InRelease
        Ign http://mirror.mirohost.net precise-updates InRelease
        Ign http://mirror.mirohost.net precise-security InRelease
        Hit http://mirror.mirohost.net precise Release.gpg
        ...
        Ign http://ppa.launchpad.net precise/main Translation-en
        Hit http://mirror.mirohost.net precise-security/restricted Translation-en
        Hit http://mirror.mirohost.net precise-security/universe Translation-en
        Fetched 1 B in 1s (0 B/s)
        W: Failed to fetch gzip:/var/lib/apt/lists/partial/mirror.mirohost.net_ubuntu_dists_precise_universe_source_Sources  Hash Sum mismatch
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Any ideas how to rescue my system?

    UPDATE: In syslog I have the following errors:

        Jul 7 21:35:02 dmitriy-desktop kernel: [ 58.059141] nautilus[1991]: segfault at 7fc09d9bb700 ip 00007fc0abb5feb6 sp 00007fff6caa4cf8 error 4 in libc-2.15.so[7fc0aba24000+1b3000]
        Jul 7 21:35:39 dmitriy-desktop kernel: [ 94.356490] update-notifier[3358]: segfault at 7f6507611700 ip 00007f64cc221eb6 sp 00007fffbcc0dd88 error 4 in libc-2.15.so[7f64cc0e6000+1b3000]
        Jul 7 21:37:45 dmitriy-desktop kernel: [ 220.501859] nautilus[3629]: segfault at 7f9b9744c700 ip 00007f9b7c9c6eb6 sp 00007fff72e990f8 error 4 in libc-2.15.so[7f9b7c88b000+1b3000]

    UPDATE 2: The Ubuntu version is 12.04.

    Read the article

  • License Requirements for Including Dual-Licensed Open-Source Software

    - by Rick Roth
    How do you opt into one software license and not the other when the distributor gives the consumer more than one choice? For example, I would like to use the DataTables JavaScript library in my web application. According to their web site, "DataTables is dual licensed under the GPL v2 license or a BSD (3-point) license." Furthermore, the source code of the JavaScript library has this text calling out both licenses:

        /**
         * @summary     DataTables
         * @description Paginate, search and sort HTML tables
         * @version     1.9.4
         * @file        jquery.dataTables.js
         * @author      Allan Jardine (www.sprymedia.co.uk)
         * @contact     www.sprymedia.co.uk/contact
         *
         * @copyright Copyright 2008-2012 Allan Jardine, all rights reserved.
         *
         * This source file is free software, under either the GPL v2 license or a
         * BSD style license, available at:
         *   http://datatables.net/license_gpl2
         *   http://datatables.net/license_bsd
         *
         * This source file is distributed in the hope that it will be useful, but
         * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
         * or FITNESS FOR A PARTICULAR PURPOSE. See the license files for details.
         *
         * For details please refer to: http://www.datatables.net
         */

    Finally, the web pages with the licensing text (e.g. the DataTables BSD license page) have this statement: "DataTables is made available under both the GPL v2 license and a BSD (3-point) style license. You can select which one you wish to use the DataTables code under." My specific question is: how do you select which one you want to use? In my case, I want to use only the BSD license, and I want to make it explicitly clear that I do not opt into the GPL v2 license in any way. How do you do that in a way that holds up to legal challenge?

    Read the article

  • From Sea to Shining Fusion HCM Specialization

    - by Kristin Rose
    Well, the polls have closed, the votes are in, and Oracle Fusion HCM Specialization is finally here! Not only is this Specialization easily achievable - partners are already seeing the "economic" value in it. But don't just take our word for it: watch below as Oracle Diamond Partner Infosys shares their experience with Oracle Fusion HCM and all the success they've already seen! Here is how you can make a change and get started today:

    STEP 1: Join OPN
    STEP 2: Join Knowledge Zone
    STEP 3: Check Business and Competency Criteria
    STEP 4: Track Competency Status
    STEP 5: Apply Now

    So let's put our differences aside, put Oracle Fusion first, and come together by learning more about this Oracle Fusion HCM Specialization.

    We are OPN and we approve this message,
    The OPN Communications Team

    Read the article

  • Third-party open-source projects in .NET and Ruby and NIH syndrome

    - by Anton Gogolev
    The title might seem inflammatory, but it's there to catch your eye, after all. I'm a professional .NET developer, but I try to follow other platforms as well. With Ruby being all hyped up (mostly due to Rails, I guess), I cannot help but compare the situation in open-source projects in Ruby and .NET. What I personally find interesting is that .NET developers are, for the most part, severely suffering from the NIH syndrome and are very hesitant to use someone else's code in pretty much any shape or form. Comparing it with Ruby, I see a striking difference. Folks out there have gems for literally every little piece of functionality imaginable. New projects pop up left and right and are generally heartily welcomed. On the .NET side we have CodePlex, which I personally find to be a place where projects grow old and eventually get abandoned. Now, there certainly are several well-known and maintained projects, but the number of those pales in comparison with Ruby's. Granted, NIH on the .NET devs' part comes mostly from the fact that there are very few quality .NET projects out there, let alone projects that solve their specific needs; but even when there is such a project, it's often frowned upon and reinvented in-house. So my question is multi-fold:

    Do you find my observations anywhere near correct?
    If so, what are your thoughts on the quality and quantity of OSS projects in .NET?
    Again, if you do agree with my thoughts on "NIH in .NET", what do you think is causing it?
    And finally, is it Ruby's feature set and community standpoint (dynamic language, strong focus on testing) that allows for such easy integration of third-party code?

    Read the article
