Search Results

Search found 5035 results on 202 pages for 'exchange 2013'.

Page 85/202

  • Which persistent & lightweight message queue for cross-domain (> 2) data exchange with Rails integration

    - by Erwan
    Hi all, I'm looking for the right messaging system for my needs. Can you help me? For now there won't be a huge amount of data to process, but I don't want to be limited later. The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast. When some data changes on a server, all servers should get the information and process it locally. (Should I create one channel per server on each of them?) The frontend is written in Rails, so to simplify development it is important that there is a gem / plugin to manage the communication and the data sent. At this time, RabbitMQ + Workling seems to fit my needs. Could this be the right choice? ActiveMQ makes me nervous because of Java (I don't know Java very well, but it seems to me to be a big CPU consumer). Others don't seem to be as mature. There may be a lot of development built on this kind of technology, so I can't afford to go down the wrong path! Thank you for your help.

    Read the article

  • Multiple calendars in Exchange Web Services

    - by user3559462
    I have multiple calendars in my mailbox, but using EWS API 2.0 I can retrieve only one of them, the main Calendar folder. Now I want the whole list of calendars, plus the appointments and meetings in each. For example, I have three calendars: 1. Calendar (color code: default) 2. Jorgen (color code: pink) 3. Soren (color code: yellow). I can retrieve all the values of the main "Calendar" using the code below: Folder inbox = Folder.Bind(service, WellKnownFolderName.Calendar); view.PropertySet = new PropertySet(BasePropertySet.IdOnly); // This results in a FindItem call to EWS. FindItemsResults<Item> results = inbox.FindItems(view); i = 1; m = results.TotalCount; if (results.Count() > 0) { foreach (var item in results) { PropertySet props = new PropertySet(AppointmentSchema.MimeContent, AppointmentSchema.ParentFolderId, AppointmentSchema.Id, AppointmentSchema.Categories, AppointmentSchema.Location); // This results in a GetItem call to EWS. var email = Appointment.Bind(service, item.Id, props); string iCalFileName = @"C:\export\appointment" + i + ".ics"; // Save as .ics. using (FileStream fs = new FileStream(iCalFileName, FileMode.Create, FileAccess.Write)) { fs.Write(email.MimeContent.Content, 0, email.MimeContent.Content.Length); } i++; } } Now I want to get the schedules from the remaining calendars as well, but I am not able to. Please help, I need this urgently.
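    A minimal sketch of one way to reach the other calendars, assuming the EWS Managed API and the already-authenticated ExchangeService named service from the snippet above (the page size and the six-month window are arbitrary choices): instead of binding only to WellKnownFolderName.Calendar, search the mailbox for every folder whose FolderClass is IPF.Appointment and read appointments from each match.

    // Find every calendar folder in the mailbox, not just the default one.
    FolderView folderView = new FolderView(100) { Traversal = FolderTraversal.Deep };
    folderView.PropertySet = new PropertySet(BasePropertySet.IdOnly, FolderSchema.DisplayName);
    SearchFilter calendarsOnly = new SearchFilter.IsEqualTo(FolderSchema.FolderClass, "IPF.Appointment");

    FindFoldersResults allCalendars =
        service.FindFolders(WellKnownFolderName.MsgFolderRoot, calendarsOnly, folderView);

    foreach (Folder folder in allCalendars.Folders)
    {
        CalendarFolder calendar = CalendarFolder.Bind(service, folder.Id);

        // CalendarView expands recurring appointments between the two dates.
        FindItemsResults<Appointment> appointments =
            calendar.FindAppointments(new CalendarView(DateTime.Today, DateTime.Today.AddMonths(6)));

        foreach (Appointment appointment in appointments)
        {
            Console.WriteLine("{0}: {1} ({2})", folder.DisplayName, appointment.Subject, appointment.Start);
        }
    }

    From there, each appointment can be bound with the MimeContent property set and written out to an .ics file exactly as in the original loop.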

    Read the article

  • Open currency exchange rate API thingy

    - by n00b
    The table is:

    currency_name  exchange_rate
    USD            1.000000
    EUR            1.194929
    CAD            0.942142
    ...

    What I want is a simple little Python script, run as a cron job every couple of hours, to update these values in the database. Are there any open APIs for this? I am like 99% sure Yahoo! or Google Finance has something like this, but I cannot find it. Maybe someone here has done this?

    Read the article

  • Extracting mail from Microsoft Exchange Server 2007 through IMAPS in Java

    - by abhishekgem84
    props.put("mail.debug", "true"); props.setProperty("mail.store.protocol","imaps"); props.setProperty("mail.imaps.auth.plain.disable","false"); props.setProperty("mail.imaps.host","Mail3.connect.com"); props.setProperty("mail.imaps.port","135"); props.setProperty("mail.imaps.user","test"); props.setProperty("mail.imaps.pwd","123"); props.setProperty("mail.imaps.ssl.protocols","SSL"); props.setProperty("mail.imaps.socketFactory.class", "javax.net.ssl.SSLSocketFactory"); props.setProperty("mail.imaps.socketFactory.fallback", "false"); props.setProperty("mail.imaps.socketFactory.port", "135"); i have done all this but it still says javax.mail.AuthenticationFailedException: failed to connect, no password specified? kindly help me out thanks

    Read the article

  • Fast exchange of data between unmanaged code and managed code

    - by vizcaynot
    Hello: Without using P/Invoke, from C++/CLI I have succeeded in integrating various methods of a third-party DLL written in C. One of these methods retrieves information from a database and stores it in different structures. The C++/CLI wrapper I wrote reads those structures and stores them in a List<>, which is then returned to an application written entirely in C#, where it is read and used. I understand that this double handling of the data (first filling several structures, then copying all of them into a List<>) may generate unnecessary overhead, which is where I wish C++/CLI had the keyword "yield". Given this scenario, do you have recommendations to avoid or reduce this overhead? Thanks.

    Read the article

  • C# reading Emails from MS Exchange

    - by Mick W
    Hi, I am new to C# and have been asked to read the title and contents of emails that arrive in a particular email account and then store them in SQL. I initially thought this would be easy, but I cannot find any simple tutorials or samples. Can anyone help? Thanks
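    A minimal sketch of one common approach, assuming the EWS Managed API on the client side and SQL Server for storage (the mailbox address, credentials, connection string and the Mail table below are placeholders for illustration, not anything from the question):

    using System.Data.SqlClient;
    using Microsoft.Exchange.WebServices.Data;

    class MailImporter
    {
        static void Main()
        {
            // Connect to the mailbox through Exchange Web Services.
            ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP2);
            service.Credentials = new WebCredentials("mailbox@example.com", "password");
            service.AutodiscoverUrl("mailbox@example.com");

            // Grab the newest 50 inbox messages, then load subject and body in one call.
            FindItemsResults<Item> results = service.FindItems(WellKnownFolderName.Inbox, new ItemView(50));
            service.LoadPropertiesForItems(results, new PropertySet(ItemSchema.Subject, ItemSchema.Body));

            using (SqlConnection conn = new SqlConnection("Server=.;Database=MailDb;Integrated Security=true"))
            {
                conn.Open();
                foreach (Item item in results)
                {
                    using (SqlCommand cmd = new SqlCommand(
                        "INSERT INTO Mail (Subject, Body) VALUES (@subject, @body)", conn))
                    {
                        cmd.Parameters.AddWithValue("@subject", item.Subject);
                        cmd.Parameters.AddWithValue("@body", item.Body.Text);
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }

    A scheduled task or Windows service could run a poll like this every few minutes; tracking which items have already been imported (for example by storing the item Id) is left out of the sketch.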

    Read the article

  • Android programming: Authentication and data exchange with Java EE

    - by Konsumierer
    I have a Java application running in a Tomcat server using Spring, Hibernate, etc., and two web interfaces, one implemented in Tapestry 5 and the other using Flex with BlazeDS and Spring-BlazeDS. In my first Android application I would now like to log in to the server and retrieve some data, and I'm wondering how to achieve this in a secure way. First of all, I need to know which technology is best for retrieving data from the server, and how I can restrict access to users who have been successfully authenticated. From what I have read so far, I would implement an HttpServlet on the server and make calls to it via HttpClient. In the servlet I could probably use the HttpSession to check whether the request comes from an authenticated user, and I would send the data serialized as JSON. Unfortunately, I've never done these things before; maybe I'm on the wrong track and there are more convenient solutions.

    Read the article

  • PHP object exchange between servers

    - by bensiu
    I have a script that reads from a database and manipulates the data, so at the end I have a $result array. Is it possible to serialize this object on one server and pass it to another script, so that the $result array is available to that script on a second server? On the first server I have: return serialize ( $results ); and on the second: $data = unserialize ( file_get_contents ( 'http://www.......com/reader.php' ) ); ...but there is no communication between them. What am I doing wrong? Bensiu

    Read the article

  • How to exchange variables between an external iframe and the parent site

    - by helle
    Hey guys, for development reasons (working with Facebook Connect) I put the Connect iframe inside another iframe. That way I can work on the Connect integration independent of my IP and don't have to develop on the live server. The iframe holding the Connect-button iframe is on my server and accesses the same DB server as the developer version (which runs on localhost). So far so good ... BUT how can I let the parent site know that the user has connected, so that I can display his profile picture in response? How can I react in general to an action/event/JS inside an iframe? Is there a way? Can the iframe post data, like a timestamp and fb_userid, to the parent site? If the iframe approach doesn't work ... I thought of saving the IP against the fb_userid (in the DB) and checking for matches ... but I don't like that idea.

    Read the article

  • Looking for work with startups in exchange for Equity [closed]

    - by SteveG
    Hi guys, I'm looking for a startup to get involved with for an equity stake. I'm a software engineer/architect with 25 years of experience. I have extensive experience with Java, C# and LAMP and have worked with several startups over the last couple of years. If you're interested and would like to know more about me, I can send you my resume. Steve

    Read the article

  • How do I increase maximum attachment size in Exchange 2007 SP1?

    - by AspNyc
    I've been looking all over for a relatively simple answer to a fairly straightforward question: "how do I increase the maximum size of attachments that can be sent and/or received in Exchange 2007?". But I have yet to find a solution that works. We have a pretty straightforward setup: Exchange 2007 SP1 running on a single server, with the OWA role delegated to a second server. We did a clean install of Exchange 2007 a year or two ago: we did not upgrade from a previous version. I forget if we installed RTM and then patched it to SP1, or if we installed with SP1 already baked in. I just thought I'd mention those items in case they influence the answer. So far, I've run the following PowerShell commands on the main Exchange server and verified that they've taken effect:
    Set-TransportConfig -MaxReceiveSize 40MB
    Set-ReceiveConnector "RcvConnector" -MaxMessageSize 40MB
    Set-Mailbox "MailboxName" -MaxReceiveSize 40MB
    As of right now, though, the specified mailbox is still rejecting messages over 10MB. You get brownie points if you can also tell me how to set the default mailbox attachment size limits, so that new accounts don't get default MaxReceiveSize values of "unlimited" as they currently do. Any advice or suggestions would be greatly appreciated. Tx in advance!

    Read the article

  • What I&rsquo;m working on for this blog&hellip;

    - by marc dekeyser
    Yes, it has gone quiet again for the time being! As I am in training for Exchange 2013 and need to keep some customers happy (well, we all have to do something to earn our keep ;)), time to write blog posts or even work on my little side projects is limited. So for the time being there are no new blog posts coming, but you can expect posts on the following topics:
    * Automating lab server deployments (using WDS and MDT 2012 RU1)
    * Scripts to automate application installations (and integration with the above)
    * Exchange 2013 posts
    * Exchange 2013 automation scripts (since I’m already seeing where I could do something here :P)
    As always, I’m still taking requests…

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. It’s job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, is Weekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘the Year = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_Idx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 22, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 21. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365 row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspxMicrosoft is in the process of reinventing itself. While some may argue that it’s “too little, too late” or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling themselves a “devices and services” company, that’s not entirely accurate. Let’s look at some facts: Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions: Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue. Server and Tools, which for FY13 made up approximately 26% of overall revenue. Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue. Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue. Online Services, which for FY13 made up approximately 4% of overall revenue. It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also fall under the Entertainment and Devices division.) It’s also unclear where Microsoft’s recent acquisition of Nokia’s handset division will fall, but let’s assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let’s assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it: Consumer Products and Devices, which would probably make up approximately 9% of overall revenue. Developer Tools, which would probably make up approximately 13% of overall revenue. Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue. Entertainment, which would probably make up approximately 13% of overall revenue. Online Services, which would probably make up approximately 17% of overall revenue. (Just so we’re clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I’m assuming that more of the income for the Windows division is coming from enterprise/volume licenses so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division so half of the Server and Tools income went there as well.) No matter how you look at it, the bulk of Microsoft’s income still comes from not just the enterprise but also software sales, and this really shouldn’t surprise anyone. So, now that the stage is set…what’s the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut-feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let’s look at it a different way. Microsoft is an enterprise-focused company trying to create a larger consumer presence.  To a large extent, this is the exact opposite of Apple, who is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making in-roads into the enterprise is the “bring your own device” phenomenon. 
Yes, Apple has created some “game-changing” products but their enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple provides revenue in terms of hardware products rather than business divisions, so it’s not possible to do a direct comparison. However, in the interest of transparency, from Apple’s Quarterly Report (filed 24 July 2013), their revenue breakdown is: iPhone, which for the 3 months ending 29 June 2013 made up approximately 51% of revenue. iPad, which for the 3 months ending 29 June 2013 made up approximately 18% of revenue. Mac, which for the 3 months ending 29 June 2013 made up approximately 14% of revenue. iPod, which for the 3 months ending 29 June 2013 made up approximately 2% of revenue. iTunes, Software, and Services, which for the 3 months ending 29 June 2013 made up approximately 11% of revenue. Accessories, which for the 3 months ending 29 July 2013 made up approximately 3% of revenue. From this, it’s pretty clear that Apple is a consumer-and-hardware-focused company. At this point, you may be asking yourself “Where is all of this going?” The answer to that lies in Microsoft’s shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least that’s been in the news lately) is the pending purchase of Nokia’s handset division. This, in combination with their Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split to looking at revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side). If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems: Windows Phone Windows Phone 7.x, which runs on top of the Windows CE kernel Windows Phone 8.x+, which runs on top of the Windows 8 kernel Windows RT The ARM-based version of Windows 8, which runs on top of the Windows 8 kernel Windows (Pro) The Intel-based version of Windows 8, which runs on top of the Windows 8 kernel Xbox The Xbox 360, which runs it’s own proprietary OS. The Xbox One, which runs it’s own proprietary OS, a version of Windows running on top of the Windows 8 kernel and a proprietary “manager” OS which manages the other two. Over time, Windows Phone 7.x devices will fade so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting story. Right now, all mobile phone devices run on some sort of ARM chip and that doesn’t look like it will change any time soon. That means Microsoft has two different Windows based operating systems for the ARM platform. Long term, it doesn’t make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button and will continue to improve API parity between Windows Phone and Windows RT. 
At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (both from a financial and development resource perspective) for Microsoft to unify them. The user interfaces are already very similar. So similar in fact, that visually it’s difficult to tell them apart. To illustrate this, here are two screen captures: Other than a few variations (the Bing News app, the picture shown in the Pictures tile and the spacing between the tiles) these are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT) and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that from a consumer perspective, there really is no practical difference between how these two operating systems look and how you interact with them. For the consumer, your entertainment device (Xbox One), phone (Windows Phone) and mobile computing device (Surface [or some other vendors tablet], laptop, netbook or ultrabook) and your desktop computing device (desktop) will all look and feel the same. While many people will denounce this consistency of user experience, I think this will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year old son knows how to use my tablet, phone and Xbox because they all feature nearly identical user experiences. When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices, and entertainment device will be running the same operating system (with minor differences related to physical hardware chipset), that means that I could potentially purchase my sons favorite Angry Birds game once and be able to install it on all of the devices I own. (And for those of you wondering, it’s only 7 [at the moment].) From an app developer perspective, the story becomes even more compelling. Right now there are differences between the different operating systems, but those differences are shrinking. The user interface technology for both is XAML but there are different controls available and different user experience concepts. Some of the APIs available are the same while some are not. You can’t develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those difference become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same “modern” (tile-based) user interface and the visible distinctions between the operating systems will become even smaller. Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it’s easier to add features and fix bugs and that those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. Now, the choice of what devices an app is available on becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem. Is all of this just rumor, speculation and conjecture? 
Of course, but it’s not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. However, Microsoft itself has even hinted at this future with their recent organizational changes and by telling developers “if you want to develop for Xbox One, start developing for Windows 8 now.” I think this pretty clearly paints the following picture: Microsoft is committed to the “modern” user interface paradigm. Microsoft is changing their release cadence (for all products, not just operating systems) to be faster and more modular. Microsoft is going to continue to unify their OS platforms both from a consumer perspective and a developer perspective. While this direction will certainly concern some people it will excite many others. Microsoft’s biggest failing has always been following through with a strong and sustained marketing strategy that presents a consistent view point and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We’ve started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn’t going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.

    Read the article

  • VG.net 8.5 Released

    - by Frank Hileman
    We have released version 8.5 of the VG.net vector graphics system. This release supports Visual Studio 2013. Companies that purchased a VG.net license after October 1, 2013, are eligible for a free upgrade. We will be sending you an email. There is one cosmetic problem that wasted our time, as we could not find a workaround. It occurs when your display is set to a high DPI. You can see the problem in the image of the toolbox below, which uses a DPI of 125%, on Windows 7: The ToolboxItem class accepts only Bitmaps with a size of 16x16. We tried many sizes and many bitmap formats. As you can see, this tiny Bitmap is then scaled by the toolbox, and the scaling algorithm adds artifacts. This is an "improvement" Microsoft recently added to Visual Studio 2013.

    Read the article

  • Veeam giveaway

    - by marc dekeyser
    Hey everybody! As you might have noticed, an extra banner has shown up on this site from Veeam. Since they have decided to sponsor this blog (thank you very much, all! It would not have happened without you!) I’ll periodically share some news from them. They are doing a big giveaway if you register on their site, one of the prizes being a Surface tablet, which you all know is brilliant! Veeam is now featuring monthly prize drawings with some very exciting prizes. Entering is easy—just one entry is all that’s required for a chance to win every month until August 2013. Among the prizes there are Microsoft Surface tablets, Apple iPads, and FREE passes to TechEd 2013 and VMworld 2013! Find out more about Veeam’s big giveaway.

    Read the article

  • Webcast: Applications Integration Architecture

    - by LuciaC
    Webcast: Applications Integration Architecture - Overview and Best Practices. Date: November 12, 2013. Join us for an Overview and Best Practices live webcast on Applications Integration Architecture (AIA). We will cover the following topics in this webcast: AIA Overview AIA - Where it Stands Pre-Install, Pre-Upgrade Concerns Understanding Dependency Certification Matrix Documentation Information Center Demonstration - How to evaluate certified combination Software Download/Installable Demonstration - edelivery Download Overview Reference Information Q & A (15 Minutes). We will be holding 2 separate sessions to accommodate different timezones:
    EMEA / APAC timezone session: Tuesday, 12-NOV-2013 at 09:00 UK / 10:00 CET / 14:30 India / 18:00 Japan / 20:00 AEDT. Details & Registration: Doc ID 1590146.1, direct registration link.
    USA timezone session: Wednesday, 13-NOV-2013 at 18:00 UK / 19:00 CET / 10:00 PST / 11:00 MST / 13:00 EST. Details & Registration: Doc ID 1590147.1, direct registration link.
    If you have any questions about the schedules, or a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler. Remember that you can access a full listing of all future webcasts, as well as replays, from Doc ID 740966.1.

    Read the article

  • AutoVue 20.2.1 for Agile Released

    - by Angus Graham
    Oracle's AutoVue 20.2.1 for Agile PLM is now available on Oracle's Software Delivery Cloud.  This latest release allows Agile PLM customers to take advantage of new AutoVue 20.2.1 features in the following Agile PLM environments:  9.3.x, 9.2.2.x, 9.2.1.x. AutoVue 20.2.1 delivers improvements in the following areas: New Format Support: AutoVue 20.2 adds support for the latest versions of popular file formats including: 2D CAD: AutoCAD 2013 MCAD: Inventor 2013, AutoCAD Mechanical 2013, Unigraphics NX8, JT 9.2 through 9.5, CATIA v5-6 R2012, Creo Parametric 2.0 ECAD: Altium Designer New Platform Support:  Client and server support has been extended to the following platforms: Java 7 JVM Google Chrome Browser Oracle VM 2 virtualization environment Installer Improvements: Ensures ports used by AutoVue are not in use - if they are, the admin will be prompted to select alternate ports. Click here to access the latest AutoVue Format Support Sheet.

    Read the article

  • 301 redirects - can we not delete old pages?

    - by KBS
    First time here :) We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it that well. The information on these pages is time sensitive.
    Old: example.com/2013-related-information.html
    New: example.com/2014-related-information.html
    The obvious solution is to delete the old page and 301-redirect it to the new page. But can we still keep the old page by giving it a new URL?
    Step 1: example.com/2013-related-information.html is redirected to example.com/2014-related-information.html
    Step 2: the old page is recreated at a new address such as example.com/new-2013-related-information.html
    What we are trying to do is send users to the fresh page while not throwing away the old copy, in case someone wants to go and dig it up. Would appreciate help!! Cheers
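    For illustration only, a minimal sketch of the redirect half of that plan, assuming (purely hypothetically, since the question doesn't say what the site runs on) an ASP.NET MVC site; the same idea can be expressed as an Apache RewriteRule or an IIS rewrite rule. The controller and action names are made up, and the route registration mapping the old .html URLs to these actions is assumed.

    using System.Web.Mvc;

    public class ArchiveController : Controller
    {
        // Step 1: the old 2013 URL answers with an HTTP 301, so search engines
        // transfer most of its ranking signals to the 2014 page.
        public ActionResult Info2013()
        {
            return RedirectPermanent("/2014-related-information.html");
        }

        // Step 2: the old content stays reachable at a brand-new address,
        // so nothing is lost for readers who want to dig it up.
        public ActionResult New2013Info()
        {
            return View("New2013RelatedInformation");
        }
    }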

    Read the article

  • December events for Oracle VM

    - by Chris Kawalek
    Where in the world is Oracle VM in December? Whether you are in the US, Asia or the UK, you can find us in December at any of the events below: UK Oracle User Group Conference 2012 Birmingham, United Kingdom December 1st – 5th, 2013 Check out the Oracle Virtualization Strategy and Roadmap Session on December 5 Gartner Data Center Conference 2012 Las Vegas, Nevada, USA December 3rd – 6th, 2013 Visit the Oracle Booth to learn about Oracle VM and Optimized Data Center Solutions. NetApp Insight Sheraton Macau, Macau, China December 11-13, 2013 Oracle VM & NetApp Storage Connect integrated solutions to simplify virtualized infrastructure management

    Read the article

  • BPM hands-on Workshops in UK, South Africa and Spain

    - by JuergenKress
    We offer free 3-day hands-on BPM 11g PS6 workshops for Oracle partners who want to become BPM Specialized:
    UK: 5-7 November 2013, Oracle Reading
    South Africa: 18-20 November 2013, Oracle Johannesburg
    Spain: 17-19 December 2013, Oracle Madrid
    For further details please visit our registration page. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: BPM Bootcamp,BPM,education,training,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article
