Search Results

Search found 21659 results on 867 pages for 'welcome always'.


  • Updated Security Baseline (7u45) impacts Java 7u40 and before with High Security settings

    - by costlow
    The Java Security Baseline has been increased from 7u25 to 7u45. For versions of Java below 7u45, this means unsigned Java applets, or Java applets that depend on JavaScript LiveConnect calls, will be blocked when using the High Security setting in the Java Control Panel. This issue only affects Applets and Web Start applications; it does not affect other types of Java applications.

    The Short Answer: Upgrading to Java 7 update 45 fixes this automatically and is strongly recommended.

    The More Detailed Answer: There are two items involved, as described on the deployment flowchart: the Security Baseline, a dynamically updated attribute that identifies which Java version contains the most recent security patches, and the Security Slider, the user-controlled setting that determines when to prompt, run, or block applets.

    The Security Baseline: Java clients periodically check in to learn which version contains the most recent security patches. Versions released in between contain only bug fixes. For example, 7u25 (July 2013) was the previous secure baseline; 7u40 contained bug fixes only, so users were not required to upgrade and were welcome to remain on 7u25. When 7u45 was released (October 2013), this critical patch update contained security patches and raised the secure baseline, so users are required to upgrade from earlier versions. For users who are not regularly connected to the internet, there is a built-in expiration date. Because of the pre-established quarterly critical patch updates, we can determine an approximate date of the next version: a critical patch released in July will have its successor released, at the latest, three months later in October.

    The Security Slider: The security slider is located within the Java Control Panel and determines which Applets & Web Start applications will prompt, which will run, and which will be blocked. One of the questions used to determine prompt/run/block is, "At or Above the Security Baseline."

    The Combination: JavaScript calls made through LiveConnect do not reside within signed JAR files, so they are considered unsigned code. This is correct even if the domain uses HTTPS, because signed JAR files represent signed "data at rest," whereas TLS (often called SSL) stands for Transport Layer Security and secures the communication channel, not the contents/code within the channel. The resulting flow for users who click "update later" is: Is the browser plug-in registered and allowed to run? Yes. Does a rule exist for this RIA (see the sketch below)? No rules apply. Does the RIA have a valid signature? Yes, and not revoked. Which security prompt is needed? "JRE is below the baseline," because 7u45 is the baseline and the user clicked "update later." Under the default High setting, unsigned code is set to "Don't Run," so users see a block prompt.

    Additional Notes: End users can control their own security slider within the control panel. System administrators can customize the security slider during automated installations. As a reminder, Java 7u51 (January 2014) will block unsigned and self-signed Applets & Web Start applications by default.
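    For reference, the "rule" check in the flow above refers to Deployment Rule Sets (available since 7u40). A minimal ruleset.xml sketch, with a hypothetical intranet location, that lets a known RIA run regardless of the slider might look like this:

        <ruleset version="1.0+">
          <rule>
            <!-- hypothetical internal host; replace with your own -->
            <id location="https://intranet.example.com" />
            <action permission="run" />
          </rule>
          <rule>
            <!-- everything else falls back to normal slider behavior -->
            <id />
            <action permission="default" />
          </rule>
        </ruleset>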

    Read the article

  • SQLAuthority News – MSDN Flash Mentions – TechNet Flash Mention – Top Community Contributors (Annual)

    - by pinaldave
    I was going over my email to reach the famous Inbox(0) when I found the TechNet Flash and MSDN Flash emails. I had kept them because those editions had mentioned me, and I quickly took screenshots of them. I am posting them here so I can refer back to them again - it is always a good idea to store important information for revisiting memory lane. As a recent update, Microsoft has awarded me the Top Community Contributors (Annual) award. I could not have done this without your valuable contribution, so I want to dedicate the award to all of you; without your presence I would not have been given this prestigious award. I could not have done it by myself alone. I also had the complete support of Jacob Sebastian (SQL Server MVP) for all of the community-related activity. Here are a few images. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code, so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like custom validation and authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also, if you're following along with the samples, that since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up.

    Part 1: Your First Web API [Video and code on the ASP.NET site]

    This screencast starts with an overview of why you'd want to use ASP.NET Web API:

    - Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.)
    - Scale (who doesn't love the cloud?!)
    - Embrace HTTP (a focus on HTTP on both client and server really simplifies and focuses service interactions)

    Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net.Http;
        using System.Web.Http;

        namespace NewProject_Mvc4BetaWebApi.Controllers
        {
            public class ValuesController : ApiController
            {
                // GET /api/values
                public IEnumerable<string> Get()
                {
                    return new string[] { "value1", "value2" };
                }

                // GET /api/values/5
                public string Get(int id)
                {
                    return "value";
                }

                // POST /api/values
                public void Post(string value)
                {
                }

                // PUT /api/values/5
                public void Put(int id, string value)
                {
                }

                // DELETE /api/values/5
                public void Delete(int id)
                {
                }
            }
        }

    Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and the built-in developer tools available in all modern browsers. For simplicity I used the Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note:

    - This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API controllers and HTML-returning controllers are used similarly, and different where API and HTML use differ.
    - A good example of where those things are different is in the routing conventions. In an API controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController (see the route sketch below).
    - The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL. That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5.

    That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
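    The verb-to-method convention above works together with the default API route that File / New Project registers. A sketch of that registration, roughly as the MVC 4 Beta template set it up in Global.asax.cs (reconstructed here, so treat the exact shape as approximate):

        // requires: using System.Web.Http; using System.Web.Routing;
        public static void RegisterRoutes(RouteCollection routes)
        {
            // Requests to /api/{controller}/{id} are dispatched to the
            // ApiController whose method name matches the HTTP verb.
            routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }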

    Read the article

  • What code smell best describes this code?

    - by Paul Stovell
    Suppose you have this code in a class:

        private DataContext _context;

        public Customer[] GetCustomers()
        {
            GetContext();
            return _context.Customers.ToArray();
        }

        public Order[] GetOrders()
        {
            GetContext();
            return _context.Orders.ToArray();
        }

        // For the sake of this example, a new DataContext is *required*
        // for every public method call
        private void GetContext()
        {
            if (_context != null)
            {
                _context.Dispose();
            }
            _context = new DataContext();
        }

    This code isn't thread-safe: if two calls to GetOrders/GetCustomers are made at the same time from different threads, they may end up using the same context, or the context could be disposed while being used. Even if this bug didn't exist, however, it still "smells" like bad code. A much better design would be for GetContext to always return a new instance of DataContext, to get rid of the private field, and to dispose of the instance when done. Changing from an inappropriate private field to a local variable feels like a better solution (see the sketch below). I've looked over the code smell lists and can't find one that describes this. In the past I've thought of it as temporal coupling, but the Wikipedia description suggests that's not the term: "Temporal coupling - when two actions are bundled together into one module just because they happen to occur at the same time." This page discusses temporal coupling, but the example is the public API of a class, while my question is about the internal design. Does this smell have a name? Or is it simply "buggy code"?
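    A minimal sketch of the refactoring described above - each public method owning a short-lived context via using, against the question's hypothetical DataContext type:

        public Customer[] GetCustomers()
        {
            // Each call gets its own context; no shared mutable field.
            using (var context = new DataContext())
            {
                return context.Customers.ToArray();
            }
        }

        public Order[] GetOrders()
        {
            using (var context = new DataContext())
            {
                return context.Orders.ToArray();
            }
        }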

    Read the article

  • Update on ASP.NET MVC 3 RC2 (and a workaround for a bug in it)

    - by ScottGu
    Last week we published the RC2 build of ASP.NET MVC 3. I blogged a bunch of details about it here. One of the reasons we publish release candidates is to help find those last "hard to find" bugs. So far we haven't seen many issues reported with the RC2 release (which is good), although we have seen a few reports of a metadata caching bug that manifests itself in at least two scenarios:

    - Nullable parameters in action methods have problems: When you have a controller action method with a nullable parameter (like int?, or a complex type that has a nullable sub-property), the nullable parameter might always end up being null, even when the request contains a valid value for the parameter.
    - [AllowHtml] doesn't allow HTML in model binding: When you decorate a model property with an [AllowHtml] attribute (to turn off HTML injection protection), the model binding still fails when HTML content is posted to it.

    Both of these issues are caused by an over-eager caching optimization we introduced very late in the RC2 milestone. This issue will be fixed for the final ASP.NET MVC 3 release. Below is a workaround step you can implement to fix it today.

    Workaround You Can Use Today

    You can fix the above issues with the current ASP.NET MVC 3 RC2 release by adding one line of code to the Application_Start() event handler within the Global.asax class of your application (see the sketch below). That line sets the ModelMetadataProviders.Current property to use the DataAnnotationsModelMetadataProvider. This causes ASP.NET MVC 3 to use a metadata provider implementation that doesn't have the more aggressive caching logic we introduced late in the RC2 release, and prevents the caching issues that cause the above problems. You don't need to change any other code within your application. Once you make this change, the above issues are fixed. You won't need this line of code within your applications once the final ASP.NET MVC 3 release ships (although keeping it in also won't cause any problems).

    Hope this helps - and please keep any reports of issues coming our way,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
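    The screenshot showing the workaround line did not survive in this excerpt. Based on the description above, the line is presumably the following, placed at the top of Application_Start():

        // requires: using System.Web.Mvc;
        protected void Application_Start()
        {
            // Workaround for the RC2 metadata caching bug: use the
            // non-caching DataAnnotations metadata provider directly.
            ModelMetadataProviders.Current = new DataAnnotationsModelMetadataProvider();

            // ... rest of the template's Application_Start code ...
        }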

    Read the article

  • Do abstractions have to reduce code readability?

    - by Martin Blore
    A good developer I work with told me recently about some difficulty he had in implementing a feature in some code we had inherited; he said the problem was that the code was difficult to follow. From that, I looked deeper into the product and realised how difficult it was to see the code path. It used so many interfaces and abstract layers that trying to understand where things began and ended was quite difficult. It got me thinking about the times I had looked at past projects (before I was so aware of clean code principles) and found it extremely difficult to get around in the project, mainly because my code navigation tools would always land me at an interface. It would take a lot of extra effort to find the concrete implementation or where something was wired up in some plugin-type architecture. I know some developers strictly turn down dependency injection containers for this very reason: it confuses the path of the software so much that the difficulty of code navigation is exponentially increased. My question is: when a framework or pattern introduces this much overhead, is it worth it? Is it a symptom of a poorly implemented pattern? I guess a developer should look to the bigger picture of what the abstraction brings to the project, to help them get through the frustration. Usually, though, it's difficult to make them see that big picture. I know I've failed to sell the need for IoC and DI with TDD; for those developers, use of those tools just cramps code readability far too much.

    Read the article

  • PetaPoco with parameterised stored procedure and Asp.Net MVC

    - by Jalpesh P. Vadgama
    I have been playing with Micro ORMs, as this is a very interesting thing happening in developer communities, and I already like the concept: it's tiny, easy to use, and allows performance tweaks. PetaPoco is one of them, and I have written a few blog posts about it. In this blog post I explain how we can use PetaPoco with stored procedures that have parameters. I am going to use the same Customer table which I have used in my previous posts. For those who have not read my previous posts, following are the links:

    - Get started with ASP.NET MVC and PetaPoco
    - PetaPoco with stored procedures

    Now our Customer table is ready, so let's create a simple stored procedure which will fetch a single customer via CustomerId. Following is the code for that:

        CREATE PROCEDURE mysp_GetCustomer
            @CustomerId AS INT
        AS
        SELECT * FROM [dbo].Customer WHERE CustomerId = @CustomerId

    Now we are ready with our stored procedure, so let's add code to the CustomerDB class to retrieve a single customer, like the following:

        using System.Collections.Generic;

        namespace CodeSimplified.Models
        {
            public class CustomerDB
            {
                public IEnumerable<Customer> GetCustomers()
                {
                    var databaseContext = new PetaPoco.Database("MyConnectionString");
                    databaseContext.EnableAutoSelect = false;
                    return databaseContext.Query<Customer>("exec mysp_GetCustomers");
                }

                public Customer GetCustomer(int customerId)
                {
                    var databaseContext = new PetaPoco.Database("MyConnectionString");
                    databaseContext.EnableAutoSelect = false;
                    var customer = databaseContext.SingleOrDefault<Customer>("exec mysp_GetCustomer @customerId", new { customerId });
                    return customer;
                }
            }
        }

    Here in the above code you can see that I have created a new method called GetCustomer, which takes customerId as a parameter, and then written the code to use the stored procedure we created to fetch the customer information. I have set EnableAutoSelect = false because I don't want PetaPoco to create the SELECT statement automatically; I want to use my stored procedure instead. Now our CustomerDB class is ready, so let's create a Details action in our controller, like the following:

        using System.Web.Mvc;

        namespace CodeSimplified.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    ViewBag.Message = "Welcome to ASP.NET MVC!";
                    return View();
                }

                public ActionResult About()
                {
                    return View();
                }

                public ActionResult Customer()
                {
                    var customerDb = new Models.CustomerDB();
                    return View(customerDb.GetCustomers());
                }

                public ActionResult Details(int id)
                {
                    var customerDb = new Models.CustomerDB();
                    return View(customerDb.GetCustomer(id));
                }
            }
        }

    Now let's create a view based on that Details action method (see the sketch below). Now everything is ready, so let's test it in the browser. First, go to the customer list, then click on details for the first customer: the stored procedure with a parameter fetches the customer details. So that's it. It's very easy. Hope you liked it. Stay tuned for more.. Happy Programming
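    The Details view is only shown as a screenshot in the original post. A minimal strongly-typed Razor sketch might look like the following (CustomerName is a hypothetical column; substitute the properties your Customer table actually has):

        @model CodeSimplified.Models.Customer

        @{
            ViewBag.Title = "Details";
        }

        <h2>Customer Details</h2>

        <fieldset>
            <legend>Customer</legend>
            <div>Customer Id: @Model.CustomerId</div>
            @* CustomerName is a hypothetical column - use your table's real properties *@
            <div>Name: @Model.CustomerName</div>
        </fieldset>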

    Read the article

  • Terminal proxy or screen without terminal emulation

    - by ZyX
    How can I make terminal applications immune to the terminal emulator closing, but still able to use all virtual terminal features? I imagine this must be something like screen, but without VT100 terminal emulation: something which will just apply whatever the application does with the "terminal proxy"'s terminal (like outputting to stdout/stderr or using stty to set terminal options) to the terminal this proxy runs in. I know about screen and altscreen on, but that produces either this (screen with TERM=screen) or this (screen with TERM=rxvt-unicode), while I want this (rxvt-unicode without screen). I have figured out that everything looks fine if I compile rxvt-unicode with USE=-xterm-color (in fact vim looks like the second picture even without screen if I add this USE flag) and set TERM=screen-256color, but I do not like this workaround because it actually changes colors, and I can't be sure that it will always change them only in this way.

    Read the article

  • SQL SERVER – 2 T-SQL Puzzles and Win USD 50 worth Amazon Gift Card and 25 Other Prizes

    - by pinaldave
    We all love brain teasers and interesting puzzles. Today I decided to come up with 2 interesting puzzles, and the winner of the contest will get a USD 50 Amazon Gift Card. The puzzles are sponsored by NuoDB. Additionally, the first 25 individuals who download NuoDB Beta 8 by midnight Friday, Sept. 21 (EST) will automatically receive a $10 Amazon gift card.

    Puzzle 1: Why does the following code, when executed in SSMS, display the result as a * (star)?

        SELECT CAST(634 AS VARCHAR(2))

    Puzzle 2: Write the shortest code that produces the result 1 without using any numbers in the SELECT statement.

    Bonus Q: How many different Operating Systems (OS) does NuoDB support? (Click here for a HINT.)

    If you can solve the above puzzles you will be eligible to win the USD 50 Amazon Gift Card. You can also always enroll for the following bonus prize, where you have a good chance of winning a USD 10 Amazon Gift Card (if you are among the first 25 individuals in the specified time).

    Bonus Prizes: The first 25 individuals who download NuoDB Beta 8 by midnight Friday, Sept. 21 (EST) will automatically receive a $10 Amazon gift card.

    Rules: Please leave an answer in the comments section below. You can resubmit your answer multiple times; the latest entry will be considered valid. The winner will be announced on 1st October. The last day to participate in the puzzle is September 28th, 2012. All valid answers will be kept hidden till September 28th, 2012. Only one winner will get the USD 50 Amazon Gift Card; the winner will be selected using a random algorithm.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: NuoDB

    Read the article

  • iPad video format

    - by Mike
    When you use iTunes to sync your videos with the iPhone, the videos are always saved at no more than 640 pixels wide, if I am not wrong. What about the iPad? What size are the videos iTunes syncs with the iPad - 1024x768? And what if the video has dimensions below 1024x768? Will it scale up, or will it keep the video at low resolution and scale when you play? Thanks.

    Read the article

  • Data Quality and Master Data Management Resources

    - by Dejan Sarka
    Many companies or organizations do regular data cleansing. When you cleanse the data, the data quality goes up to some higher level; that level is determined by the amount of work invested in the cleansing. As time passes, the data quality deteriorates, and you need to repeat the cleansing process. If you spend an equal amount of effort as you did with the previous cleansing, you can expect the same level of data quality as you had after the previous cleansing. Then the data quality deteriorates over time again, and the cleansing process starts over and over again.

    The idea of Data Quality Services (DQS) is to mitigate the cleansing process. While the amount of time you need to spend on cleansing decreases, you achieve higher and higher levels of data quality. While cleansing, you learn what types of errors to expect, discover error patterns, find domains of correct values, etc. You don't throw away this knowledge: you store it and use it to find and correct the same issues automatically during your next cleansing process. The following figure shows this graphically.

    The idea of master data management, which you can perform with Master Data Services (MDS), is to prevent data quality from deteriorating. Once you reach a particular quality level, the MDS application - together with the defined policies, people, and master data management processes - allows you to maintain this level permanently. This idea is shown in the following picture.

    OK, now you know what DQS and MDS are about, and you can imagine the importance of maintaining data quality. Here are some resources that help you prepare and execute data quality (DQ) and master data management (MDM) activities.

    Books:
    - Dejan Sarka and Davide Mauri: Data Quality and Master Data Management with Microsoft SQL Server 2008 R2 - a general introduction to MDM, MDS, and data profiling; matching explained in depth.
    - Dejan Sarka, Matija Lah and Grega Jerkic: MCTS Self-Paced Training Kit (Exam 70-463): Building Data Warehouses with Microsoft SQL Server 2012 - I wrote quite a few chapters about DQ and MDM, and also introduced SQL Server 2012 DQS.
    - Thomas Redman: Data Quality: The Field Guide - you should start with this book; Thomas Redman is the father of DQ and MDM.
    - Tyler Graham: Microsoft SQL Server 2012 Master Data Services - MDS in depth from a product team mate.
    - Arkady Maydanchik: Data Quality Assessment - data profiling in depth.
    - Tamraparni Dasu, Theodore Johnson: Exploratory Data Mining and Data Cleaning - advanced data profiling with data mining.

    Forthcoming presentations: I am presenting a DQS and MDM seminar at PASS SQL Rally Amsterdam 2013, on Wednesday, November 6th, 2013: Enterprise Information Management with SQL Server 2012 - a good kick start to your first DQ and / or MDM project.

    Courses: Data Quality and Master Data Management with SQL Server 2012 - I wrote a 2-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, who are welcome to contact me as well.

    Start improving the quality of your data now!

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie - or, more accurately, we don't understand what these figures mean. Consider the two simple queries below:

        SELECT * FROM Person.BusinessEntity
        JOIN Person.BusinessEntityAddress
            ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
        GO
        SELECT * FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
            ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 9.8% / 90.2% split. Close, but no cigar.

    Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those, and those are the figures we are after. But what is the worrying word there? "Estimated." So these are not "actual" execution costs. But what's the problem in comparing the estimated costs to derive a meaning of "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. By modifying the second query to also show the total number of lines on each order:

        SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
        FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
            ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy.

    Estimates can be out from actuals for a whole host of reasons. Scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan: it always estimates to 0 (well, a very small number). Take for instance the following UDF:

        CREATE FUNCTION dbo.udfSumSalesForCustomer(@CustomerId integer)
        RETURNS money
        AS
        BEGIN
            DECLARE @Sum money
            SELECT @Sum = SUM(SalesOrderHeader.TotalDue)
            FROM Sales.SalesOrderHeader
            WHERE CustomerID = @CustomerId
            RETURN @Sum
        END

    If we have two statements, one that fires the UDF and another that doesn't:

        SELECT CustomerID
        FROM Sales.Customer
        ORDER BY CustomerID
        GO
        SELECT CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID)
        FROM Sales.Customer
        ORDER BY CustomerID

    the cost relative to the batch is a 50/50 split, but there has to be an actual cost of firing the UDF. Indeed, Profiler shows us nowhere even remotely near 50/50!!!!

    Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER (PARTITION BY SalesOrderId ORDER BY SalesOrderDetailId RANGE UNBOUNDED PRECEDING)
        FROM Sales.SalesOrderDetail
        GO
        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER (PARTITION BY SalesOrderId ORDER BY SalesOrderDetailId ROWS UNBOUNDED PRECEDING)
        FROM Sales.SalesOrderDetail

    By now it won't be a great surprise that the Profiler trace reads a *tiny* bit different.
    So, moral of the story: percentage relative to batch can give a rough 'finger in the air' measurement, but don't rely on it as fact.

    Read the article

  • Best way to implement user-powered data validation

    - by vegetables
    I run a product recommendation engine and I'm hitting a few snags. I'm looking to see if anyone has any recommendations on what I should do to minimize these issues. Here's how the site works: users come to the site and are presented with product recommendations based on some criteria. If a user knows of a product that is not in our system, they can add it by providing the product name and manufacturer. We take that information, and:

    1. Hit one API to gather all the product meta-data (and to validate the product spelling, etc). If the product is not in this first API, we do not allow it in our system.
    2. Use the information from step 1 to hit another API for pricing information (gathered from many places online).

    For the sake of discussion, assume that I am searching both APIs in the most efficient/successful manner possible. For the most part this works very well; I'd say ~80% of our data is perfectly accurate, but there are a few issues:

    - Sometimes the pricing API (step 2) doesn't have any information for the product. The way the pricing API is built, it will always return something (theoretically, the closest possible match), and there's no guarantee that the product name is spelled exactly the same way in both APIs, so there's no automated way of knowing if it's the right product.
    - When the pricing API finds the right product, it occasionally has outdated, or even invalid, pricing data (e.g. if it screen-scraped the wrong price from a website).

    Since the site was fairly small at first, I was able to manually verify every product that was added to the website. However, the site has grown to the point where this is taking several hours per day, and is just not an efficient use of my time. So, my question is: aside from hiring someone (or getting an intern) to validate all the data manually, what would be the best system for letting my user base self-manage the data? Specifically, how can I allow users to edit the data while minimizing the risk of someone ambushing my website, or accidentally setting the data incorrectly?

    Read the article

  • Podcast Show Notes: Evolving Enterprise Architecture

    - by Bob Rhubart
    Back in March, Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (Visual Integrator Consulting) and Oracle product manager Jeff Davies participated in an ArchBeat virtual meet-up. The resulting conversation quickly turned to the changing nature of enterprise architecture and the various forces driving that change. All four parts of that wide-ranging conversation are now available:

    - Listen to Part 1
    - Listen to Part 2
    - Listen to Part 3
    - Listen to Part 4

    As you'll hear, Mike, Jordan, and Jeff bring unique perspectives and opinions to this very lively conversation. These are three very sharp, very experienced guys, and as you might expect, they don't always walk in lock-step when it comes to EA. You can learn more about Mike, Jordan, and Jeff - and share your opinions with them - through the links below:

    - Mike van Alst: Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile
    - Jordan Braunstein: Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile
    - Jeff Davies: Homepage | Blog | LinkedIn | Oracle Mix (also check out Jeff's book: The Definitive Guide to SOA: Oracle Service Bus)

    Up Next: Next week's program features highlights from the panel discussion at the Oracle Technology Architect Day event held in Anaheim, CA on May 19. You'll hear from Oracle ACE Directors Basheer Khan and Floyd Teter, Oracle virtualization expert and former Sun Microsystems principal engineer Jeff Savit, Oracle security analyst Geri Born, and event MC Ralf Dossman, Director of SOA and Middleware in Oracle's Enterprise Solutions Group. Stay tuned: RSS

    Read the article

  • Wifi connection issues for Android with Linksys WRT54G

    - by Paul
    I have had this Linksys WRT54G for many years now, and it has always worked without a hassle. But for some reason I am unable to connect my brand new Android phone (Magic). The wireless network is broadcast and simply secured using WPA PSK TKIP. My phone sees the network (with excellent signal strength) and correctly asks me for the WPA password. Then, when it tries to connect, it stays for a while on "Retrieving address..." (loosely translated from Dutch) and finally fails with "Remember, secured with WPA". Does anybody know how to solve this issue? Edit: the Android phone is genuine (not modified/rooted or anything of the sort).

    Read the article

  • Need to open links in a new tab in IE8!

    - by BHOdevelopper
    I have a sidebar (a right-sided iframe), and when I click on a link in it, IE8 opens a new window (in Firefox it opens a new tab). What do I need to do to make links open in a new tab in IE8? I have already set, under Tools - Internet Options - Settings, 'Always open pop-ups in a new tab' and 'A new tab in the current window', but it still doesn't work. My links are pretty simple - what am I missing? (The example anchor markup was stripped from this excerpt.) Also, some sites say to register Regsvr32 actxprxy.dll to fix this problem; that still doesn't work either. And I want this to work with a simple click, no 'right-click, open in new tab' option. I also hope I won't get the 'can't change how IE8 works' answer. ;)

    Read the article

  • Tool to convert blogger.com content to dasBlog

    - by Daniel Moth
    Due to blogger.com dropping FTP support, I've had to move my blog. If you are in a similar situation, this post will help you by showing you the necessary steps to take.

    Goals: No loss of blog posts or comments, AND all existing permalinks continue to work (redirect to the correct place).

    Steps:
    1. Download the XML files corresponding to your blogger.com content and store them in a folder.
    2. Install and configure dasBlog on your local machine.
    3. Configure your web.config file (it will need updating once you run step 4).
    4. Use the tool I describe further down to generate the content and place it in the right place.
    5. Test your site locally.
    6. Once you are happy, repeat step 2 on your hosting provider of choice. Remember to copy up your dasBlog theme folder if you created one.
    7. Copy up the local web.config file and the XML dasBlog content files generated by the tool of step 4.
    8. Test your site on the server.
    9. Once you are happy, go live (following instructions from your hoster). In my case, I gave the nameservers from my new hoster to my existing domain registrar and they made the switch.

    Tool (code): At step 4 above I referred to a tool. That is an overstatement; it is simply one 450-line C# code file that you can download here: BloggerToDasBlog.cs. I used this from a .NET 2.0 console app (and I ran it under the Visual Studio debugger, i.e. F5) like this: Program.cs. The console app referenced the dasBlog 2.3 ASP.NET Blogging Engine, i.e. the newtelligence.DasBlog.Runtime.dll assembly. Let me describe what the code does.

    Input:
    - A path to a folder where the XML files from the old blogger.com blog reside. It can deal with both types of XML file.
    - A full file path to a file where it creates XML redirect input (as required by the rewriteMap mentioned here).
    - The blog URL.
    - The author's email.
    - The blog author name.
    - A path to an empty folder where the new XML dasBlog content files will get created.
    - The subfolder name used after the domain name in the URL.
    - The 3 regex patterns to use. You can use the same ones as mine, but you will need to tweak the monthly_archive rule.

    Again, to see what values I passed for all the above, see my Program.cs file.

    Output:
    - It creates dasBlog XML files in the folder specified, by parsing the old blogger.com XML files that reside in the input folder. After that is generated, copy it to the "Content" folder under your dasBlog installation.
    - It creates an XML file with a single ignorable root element and a bunch of inner XML elements. You can copy-paste these into the web.config file as discussed in this post (see the sketch below for their general shape).

    Other notes:
    - For each blog post, it detects outgoing links to itself (i.e. to the same blog) and rewrites those to point to the new URLs, so internal links do not rely on the web.config redirects.
    - It deals with duplicate post titles; it does not deal with triplicates and higher.
    - It removes all references to blogger.com (e.g. references to [email protected], the injected hidden footer for statistics that each blog post has, and others - see the code).
    - It creates a lot of diagnostic output (in the Output window), and indeed the documentation for the code is in the Debug.WriteLine statements ;)

    This is not code I will maintain or support - it was a throwaway one-use project that I am sharing here as a starting point for anyone finding themselves in the same boat that I was in. Enjoy "as is".

    Comments about this post welcome at the original blog.
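    As a rough sketch of the shape of those copy-pasted elements - an IIS URL Rewrite rewriteMap, with hypothetical paths; the real entries come from the tool's output:

        <rewriteMap name="StaticRedirects">
          <!-- hypothetical old blogger.com permalink -> new dasBlog permalink -->
          <add key="/2009/05/some-blogger-post.html" value="/blog/Some+Blogger+Post.aspx" />
          <add key="/2009/06/another-post.html" value="/blog/Another+Post.aspx" />
        </rewriteMap>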

    Read the article

  • How do I get GNU screen not to start in my home directory in OS X?

    - by Benjamin Oakes
    GNU Screen (screen) behaves differently on OS X 10.5 (Leopard) and 10.6 (Snow Leopard) compared to Linux (at least Ubuntu, Red Hat, and Gentoo) and OS X 10.4 (Tiger). In 10.5 and 10.6, new screens (made with screen or ^A c) always place me in my home directory ~. In Linux and OS X Tiger, new screens have a pwd of wherever the screen was created originally. Made-up examples to illustrate what I mean:

    Tiger:

        $ cd ~/foo
        $ pwd
        /Users/ben/foo
        $ screen
        $ pwd
        /Users/ben/foo
        $ screen  # or ^A c
        $ pwd
        /Users/ben/foo

    Leopard, Snow Leopard:

        $ cd ~/foo
        $ pwd
        /Users/ben/foo
        $ screen
        $ pwd
        /Users/ben
        $ screen  # or ^A c
        $ pwd
        /Users/ben

    How do I get Leopard and Snow Leopard to behave like Tiger used to?

    Read the article

  • FluentPath: a fluent wrapper around System.IO

    - by Bertrand Le Roy
    .NET is now more than eight years old, and some of its APIs have aged with more grace than others. System.IO in particular has always been a little awkward: it's mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo), and in these APIs, paths are plain strings. Since .NET v1, lots of good things happened to C#: lambda expressions, extension methods, and optional parameters, to name just a few. Outside of .NET, other interesting things happened as well. For example, you might have heard about this JavaScript library that had some success introducing a fluent API to handle the hierarchical structure of the HTML DOM. You know? jQuery.

    Knowing all that, every time I need to use the stuff in System.IO, I cringe. So I thought I'd just build a more modern wrapper around it. I used a fluent API based on an essentially immutable Path type and an enumeration of such path objects. To achieve the fluent style, a healthy dose of lambda expressions is being used to act on the objects. Without further ado, here's an example of what you can do with the new API. In this example, I'm using a Media Center extension that wants all video files to be in their own folder. For that, I need a small tool that creates a directory for each video file and moves the file in there. Here's the code for it:

        Path.Get(args[0])
            .Select(p => p.Extension == ".avi" ||
                         p.Extension == ".m4v" ||
                         p.Extension == ".wmv" ||
                         p.Extension == ".mp4" ||
                         p.Extension == ".dvr-ms" ||
                         p.Extension == ".mpg" ||
                         p.Extension == ".mkv")
            .CreateDirectory(p => p.Parent
                                   .Combine(p.FileNameWithoutExtension))
            .Previous()
            .Move(p => p.Parent
                        .Combine(p.FileNameWithoutExtension)
                        .Combine(p.FileName));

    This code creates a Path object pointing at the path given by the first command-line argument of my executable. It then selects all video files. After that, it creates directories that have the same names as each of the files, but without their extension. The result of that operation is the set of created directories. We can then get back to the previous set using the Previous method, and finally we can move each of the files in the set to the corresponding freshly created directory, whose name is the combination of the parent directory and the filename without extension.

    The new fluent path library covers a fair part of what's in System.IO in a single, convenient API. Check it out, I hope you'll enjoy it. Suggestions are more than welcome. For example, should I make this its own project on CodePlex, or is this informal style just OK? Anything missing that you'd like to see? Is there a specific example you'd like to see expressed with the new API? Bugs? The code can be downloaded from here (this is under a new BSD license): http://weblogs.asp.net/blogs/bleroy/Samples/FluentPath.zip

    Read the article

  • Firewall behind firewall

    - by Makach
    I've recently changed jobs and I've been set up with a new workstation. At all previous places where I've worked, they've had some sort of local firewall installed on each and every workstation - but here I've been told not to activate it, because it is not necessary since we're already behind a hardware firewall. To me this seems a bit naïve, but I cannot substantiate it. I always thought a local firewall was good practice: if something managed to come through the hardware firewall, there might be a slight chance that other computers on the LAN would block the internal threat. We have free access to the internet, and we have a virus checker installed.

    Read the article

  • Lotus Notes 8.5.3 "Unable To Invoke Program"

    - by Ross Doughty
    Just got a little issue with some of my users today: when trying to open an email attachment in Lotus Notes, they get the error "Unable To Invoke Program". Now, they can always just save the attachment and then open it through Windows Explorer, which is fine, but I would rather find a proper solution for this. Knowing almost nothing about Lotus Notes does not help me, I know, but I think it may have something to do with the default file association; however, my colleague tells me that Notes uses the Windows file associations, which are working. Any ideas, anyone?

    Read the article

  • Network Scanning: HP Laserjet 2840 & Windows 7 (+ Linux?)

    - by Eddie Parker
    I own an HP Laserjet 2840 multifunction printer. It's a fabulous device, but on Windows 7 it can't network-scan. HP doesn't seem to support it on anything newer than Vista, and they aren't forthcoming with solutions. I'm curious if anyone else has found workarounds to this problem. One possible avenue for me is that I have an always-on Gentoo server that is configured to see the printer as a fax and printer using hplip. I have this shared via Samba and I can print just fine through Samba. Is there a similar way I can do network scanning? Anyone know where I should start looking for this?

    Read the article

  • So long and thanks for the fish…

    - by Geoff N. Hiten
    This marks my last post as a SQLPASS Board member. I learned a lot during my year of service and I thank everyone involved for this opportunity. I would especially like to thank the Chapter leaders and the Regional Mentors for Virtual Chapters, who (mostly) patiently taught me about Virtual Chapters. I hope the changes I put in place will help strengthen and grow VCs and PASS going forward. I would also like to thank everyone who encouraged me to reach beyond my comfort zone and accept a leadership position within the PASS organization. My overall principle was to be a good steward of the PASS community. Could I have done more? Always. Did I do enough? I hope so. But PASS is a volunteer organization, and my time, like yours, is limited. I have other obligations in life that supersede PASS, and now I have more time for some of those. I won't be going away or leaving the SQL community; I will still contribute to the community and support PASS, just in a different role. Time to let somebody else enjoy the hot seat for a while. Finally, everyone who voted (not just for me) deserves thanks: more voters and more engaged voters, strong candidates, and a vigorous debate were all I wanted out of declaring as a candidate last year. This year the SQL community got exactly that. Thank you.

    Read the article

  • Resources for the "What's New in VS 2013" Presentation

    - by John Alexander
    Originally posted on: http://geekswithblogs.net/jalexander/archive/2013/10/24/resources-for-the-ldquowhatrsquos-new-in-vs-2013rdquo-presentation.aspx

    Thanks for attending the "What's New in Visual Studio 2013 (and TFS too)" presentation. As promised, here are some links! Note: if you didn't attend, it's OK. This is for you, too.

    - The bits themselves: this article introduces new and enhanced features in Visual Studio 2013.
    - Visual Studio Virtual Launch - lots of videos here, and then on November 13th, live sessions and a Q&A session.
    - What features map to what Visual Studio editions.
    - Visual Studio 2013 New Editor Features.
    - Visual Studio 2013 Application Lifecycle Management Virtual Machine and hands-on labs / demo scripts from Brian Keller.
    - More on CodeLens from Zain Naboulsi.
    - What are Web Essentials? You can now download Web Essentials for Visual Studio 2013 RTM.
    - A great overview of TFS 2013 from Brian Harry.
    - The release archive lists updates made to Team Foundation Service, along with which version of Team Foundation Server the updates are a part of.
    - REST API for Team Rooms.
    - "What's new in Visual Studio for Web Developers and Front End Devs" screencasts - quick, easy, and painless, from the always awesome Scott Hanselman.
    - Introducing ASP.NET Identity - a membership system for ASP.NET applications.
    - Visual Studio 2013 Adds New Project Templates with Improvements and Social Accounts Authentication.

    Read the article

  • Allen for Umbraco with location EXIF meta data

    - by Vizioz Limited
    The latest version of Allen for Umbraco has now hit the Apple App Store. We have managed to add some nice improvements to this version, which include:

    - Storing location and direction information when photos are taken within the App
    - Embedding EXIF data into the images when they are uploaded
    - Background uploading
    - Pull-to-refresh for the media tree

    Location and Direction: By default, when the camera is used within an application, the location and the direction the camera is pointing are not stored within the image metadata. We have now added full support so that this data is added. We have also added a setting that allows you to prevent this data from being uploaded to your website: if you do not want the location data to be sent, you can turn it off within Allen. Note: please don't forget that location services do need to be turned on to allow the app to access the images in the phone's asset library. We have already had quite a few ideas from users for using this location data, from logging free parking in Denmark to geo-tagging holiday photos and linking the photos to Google Street View.

    Embedding EXIF data: We now embed all the metadata available on the iPhone into the image when it is uploaded to your server, which allows you to pull the data out and use it within your site (see the sketch below). Have a look at Cultiv's Photo Meta Data package for great example code that allows you to automatically pull this data out and populate properties on your Umbraco media item. We slightly modified the source code of this package to allow it to always extract the image data, as the default package requires a property to allow the data to be extracted. It's an easy change; if you get stuck, add a comment to this post.

    Background Uploading: If you try to upload multiple images and need to start doing something else on your phone, you can now click the home button and the application will continue to upload your images in the background. As soon as it has finished, you will receive a standard Apple notification.

    Pull to Refresh: Our final enhancement has been to add "pull to refresh" to the media trees. Just pull the tree downwards with your finger and it will refresh. This is useful if you are adding items to your media tree while testing your site with Allen for Umbraco.

    Future enhancements... your ideas? If you have any ideas for future enhancements, feel free to add a comment below!
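    As a rough sketch (an editor's illustration, not code from the Cultiv package) of what reading the embedded GPS coordinates might look like server-side with System.Drawing, assuming the standard EXIF GPS property IDs (0x0001/0x0003 hold the N/S and E/W references, 0x0002/0x0004 the coordinates as three rationals):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;

        static class ExifGps
        {
            // Reads one unsigned rational (numerator/denominator) from a GPS property.
            static double Rational(byte[] bytes, int offset)
            {
                return BitConverter.ToUInt32(bytes, offset) /
                       (double)BitConverter.ToUInt32(bytes, offset + 4);
            }

            // Converts the degrees/minutes/seconds rationals into a signed decimal.
            static double ToDecimal(PropertyItem prop, string reference)
            {
                double d = Rational(prop.Value, 0);
                double m = Rational(prop.Value, 8);
                double s = Rational(prop.Value, 16);
                double result = d + m / 60.0 + s / 3600.0;
                return (reference == "S" || reference == "W") ? -result : result;
            }

            public static void PrintLocation(string path)
            {
                using (var image = Image.FromFile(path))
                {
                    // Throws ArgumentException if the image carries no GPS EXIF data.
                    string latRef = System.Text.Encoding.ASCII
                        .GetString(image.GetPropertyItem(0x0001).Value).TrimEnd('\0');
                    string lonRef = System.Text.Encoding.ASCII
                        .GetString(image.GetPropertyItem(0x0003).Value).TrimEnd('\0');
                    double lat = ToDecimal(image.GetPropertyItem(0x0002), latRef);
                    double lon = ToDecimal(image.GetPropertyItem(0x0004), lonRef);
                    Console.WriteLine("{0}, {1}", lat, lon);
                }
            }
        }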

    Read the article
