Search Results

Search found 22625 results on 905 pages for 'must do better'.


  • Announcing ASP.NET MVC 3 (Release Candidate 2)

    - by ScottGu
    Earlier today the ASP.NET team shipped the final release candidate (RC2) for ASP.NET MVC 3.  You can download and install it here.

    Almost there…

    Today’s RC2 release is the near-final release of ASP.NET MVC 3, and is a true “release candidate” in that we are hoping not to make any more code changes with it.  We are publishing it today so that people can do final testing with it, let us know if they find any last-minute “showstoppers”, and start updating their apps to use it.  We will officially ship the final ASP.NET MVC 3 “RTM” build in January.

    Works with both VS 2010 and VS 2010 SP1 Beta

    Today’s ASP.NET MVC 3 RC2 release works with both the shipping version of Visual Studio 2010 / Visual Web Developer 2010 Express and the newly released VS 2010 SP1 Beta.  This means that you do not need to install VS 2010 SP1 (or the SP1 beta) in order to use ASP.NET MVC 3 – it works just fine with the shipping Visual Studio 2010.  I’ll do a blog post next week, though, about some of the nice additional feature goodies that come with VS 2010 SP1 (including IIS Express and SQL CE support within VS) which make the dev experience for both ASP.NET Web Forms and ASP.NET MVC even better.

    Bugs and Perf Fixes

    Today’s ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations.  Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3.

    Final Tweaks and Fit-N-Finish

    In addition to bug fixes and performance optimizations, today’s RC2 build contains a number of last-minute feature tweaks and “fit-n-finish” changes for the new ASP.NET MVC 3 features.  The feedback and suggestions we’ve received during the public previews have been invaluable in guiding these final tweaks, and we really appreciate people’s support in sending this feedback our way.  Below is a short list of some of the feature changes/tweaks made between last month’s ASP.NET MVC 3 RC release and today’s ASP.NET MVC 3 RC2 release.

    jQuery updates and addition of jQuery UI

    The default ASP.NET MVC 3 project templates have been updated to include jQuery 1.4.4 and jQuery Validation 1.7.  We are also excited to announce today that we are including jQuery UI within our default ASP.NET project templates going forward.  jQuery UI provides a powerful set of additional UI widgets and capabilities.  It will be added by default to your project’s \scripts folder when you create new ASP.NET MVC 3 projects.

    Improved View Scaffolding

    The T4 templates used for scaffolding views with the Add View dialog now generate views that use Html.EditorFor instead of helpers such as Html.TextBoxFor.  This change enables you to optionally annotate models with metadata (using data annotation attributes) to better customize the output of your UI at runtime.  The Add View scaffolding also supports improved detection and usage of primary key information on models (including support for naming conventions like ID, ProductID, etc.).  For example, the Add View dialog box uses this information to ensure that the primary key value is not scaffolded as an editable form field, and that links between views are auto-generated correctly with primary key information.  The default Edit and Create templates also now include references to the jQuery scripts needed for client validation, so scaffolded form views support client-side validation by default (no extra steps required).
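
    For instance, here is a small sketch (not the actual T4 template output; the Product model below is hypothetical) of how data annotations on a model flow through the new Html.EditorFor-based scaffolding:

        using System.ComponentModel.DataAnnotations;

        // Attributes on the model drive both the rendered editor and validation.
        public class Product
        {
            // Detected as the primary key by naming convention, so the Add View
            // dialog will not scaffold it as an editable form field.
            public int ProductID { get; set; }

            [Required]
            [StringLength(40)]
            public string ProductName { get; set; }

            [Range(0.0, 10000.0)]
            public decimal UnitPrice { get; set; }
        }

        // In the generated Edit/Create view, each property is rendered with a call like:
        //
        //     @Html.EditorFor(model => model.ProductName)
        //
        // and, because the templates now reference the jQuery validation scripts, the
        // [Required] / [StringLength] / [Range] rules above are enforced client-side too.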
    Client-side validation with ASP.NET MVC 3 is also done using an unobtrusive JavaScript approach – making pages fast and clean.

    [ControllerSessionState] –> [SessionState]

    ASP.NET MVC 3 adds support for session-less controllers.  With the initial RC you used a [ControllerSessionState] attribute to specify this.  We shortened this in RC2 to just be [SessionState].  Note that in addition to turning off session state, you can also set it to be read-only (which is useful for web-farm scenarios where you are reading but not updating session state on a particular request).

    [SkipRequestValidation] –> [AllowHtml]

    ASP.NET MVC includes built-in support to protect against HTML and cross-site script injection attacks, and will throw an error by default if someone tries to post HTML content as input.  Developers need to explicitly indicate that this is allowed (and that they’ve hopefully built their app to securely support it) in order to enable it.  With ASP.NET MVC 3, we are also now supporting a new attribute that you can apply to properties of models/viewmodels to indicate that HTML input is enabled, which enables much more granular protection in a DRY way.  In last month’s RC release this attribute was named [SkipRequestValidation].  With RC2 we renamed it to [AllowHtml] to make it more intuitive.  Setting the [AllowHtml] attribute on a model/viewmodel property will cause ASP.NET MVC 3 to turn off HTML injection protection when model binding just that property.

    Html.Raw() helper method

    The new Razor view engine introduced with ASP.NET MVC 3 automatically HTML encodes output by default.  This helps provide an additional level of protection against HTML and script injection attacks.  With RC2 we are adding an Html.Raw() helper method that you can use to explicitly indicate that you do not want to HTML encode your output, and instead want to render the content “as-is”.

    ViewModel/View –> ViewBag

    ASP.NET MVC has (since V1) supported a ViewData[] dictionary within Controllers and Views that enables developers to pass information from a Controller to a View in a late-bound way.  This approach can be used instead of, or in combination with, a strongly-typed model class.  A common use case is a strongly-typed Product model passed to the view together with two late-bound variables via the ViewData[] dictionary.  With ASP.NET MVC 3 we are introducing a new API that takes advantage of the dynamic type support within .NET 4 to set/retrieve these values.  It allows you to use standard “dot” notation to specify any number of additional variables to be passed, and does not require that you create a strongly-typed class to do so.  With earlier previews of ASP.NET MVC 3 we exposed this API using a dynamic property called “ViewModel” on the Controller base class, and a dynamic property called “View” within view templates.  A lot of people found the fact that there were two different names confusing, and several also said that using the name ViewModel was confusing in this context – since you often create strongly-typed ViewModel classes in ASP.NET MVC, and they do not use this API.  With RC2 we are exposing a dynamic property that has the same name – ViewBag – within both Controllers and Views.  It is a dynamic collection that allows you to pass additional bits of data from your controller to your view template to help generate a response.
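
    To make these renamed APIs concrete, here is a minimal C# sketch of each; the controller and model names below are invented for illustration and are not code from the RC2 release itself:

        using System.Web.Mvc;
        using System.Web.SessionState;

        // Session-less controller: [SessionState] replaces RC1's [ControllerSessionState].
        [SessionState(SessionStateBehavior.Disabled)] // or SessionStateBehavior.ReadOnly
        public class StatusController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
        }

        // [AllowHtml] replaces RC1's [SkipRequestValidation]: request validation is
        // turned off when model binding this one property only.
        public class BlogPostInput
        {
            public string Title { get; set; }  // posting HTML here still throws an error

            [AllowHtml]
            public string Body { get; set; }   // HTML input is allowed for this property
        }

        // ViewBag replaces the earlier ViewModel (controller) / View (template) dynamic
        // properties. Inside any controller action you can write, for example:
        //
        //     ViewBag.Message = "Some status text";
        //     ViewBag.Categories = categories;   // any extra late-bound data
        //
        // and read ViewBag.Message / ViewBag.Categories from the view template.
        //
        // In a Razor view, Html.Raw() opts a specific value out of automatic encoding:
        //
        //     @Html.Raw("<em>rendered as markup, not encoded</em>")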
    As an example, a controller could use ViewBag to pass a time-stamp message as well as a list of all categories to its view template, as in the sketch above.  A view template that is strongly typed to expect a Product class as its model can then use those two extra bits of information in the ViewBag to generate its response – for instance, using the list of categories passed in the dynamic ViewBag collection to generate a drop-down list of friendly category names that sets the CategoryID property of the Product object.  The Controller/View combination then generates an HTML response containing both the message and the populated drop-down list.

    Output Caching Improvements

    ASP.NET MVC 3’s output caching system no longer requires you to specify a VaryByParam property when declaring an [OutputCache] attribute on a Controller action method.  MVC 3 now automatically varies the output cached entries when you have explicit parameters on your action method – allowing you to cleanly enable output caching on actions with nothing more than the bare [OutputCache] attribute.  In addition to supporting full page output caching, ASP.NET MVC 3 also supports partial-page caching – which allows you to cache a region of output and re-use it across multiple requests or controllers.  The [OutputCache] behavior for partial-page caching was updated with RC2 so that sub-content cached entries are varied based on input parameters as opposed to the URL structure of the top-level request – which makes caching scenarios both easier and more powerful than the behavior in the previous RC.

    @model declaration does not add whitespace

    In earlier previews, the strongly-typed @model declaration at the top of a Razor view added a blank line to the rendered HTML output.  This has been fixed so that the declaration does not introduce whitespace.

    Changed "Html.ValidationMessage" Method to Display the First Useful Error Message

    The behavior of the Html.ValidationMessage() helper was updated to show the first useful error message instead of simply displaying the first error.  During model binding, the ModelState dictionary can be populated from multiple sources with error messages about a property, including from the model itself (if it implements IValidatableObject), from validation attributes applied to the property, and from exceptions thrown while the property is being accessed.  When the Html.ValidationMessage() method displays a validation message, it now skips model-state entries that include an exception, because these are generally not intended for the end user.  Instead, the method looks for the first validation message that is not associated with an exception and displays that message.  If no such message is found, it defaults to a generic error message that is associated with the first exception.

    RemoteAttribute “Fields” –> “AdditionalFields”

    ASP.NET MVC 3 includes built-in remote validation support with its validation infrastructure.  This means that the client-side validation script library used by ASP.NET MVC 3 can automatically call back to controllers you expose on the server to determine whether an input element is valid as the user is editing the form (allowing you to provide real-time validation updates).  You can accomplish this by decorating a model/viewmodel property with a [Remote] attribute that specifies the controller/action that should be invoked to remotely validate it.  With the RC this attribute had a “Fields” property that could be used to specify additional input elements that should be sent from the client to the server to help with the validation logic.  To improve the clarity of what this property does, we have renamed it to “AdditionalFields” with today’s RC2 release, as in the sketch below.
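
    Here is a minimal sketch of the renamed property in use; the action name, field names, and validation rule are illustrative only, not from the original post:

        using System.Web.Mvc;

        public class RegisterInput
        {
            // The client-side validation script calls /Account/UserNameAvailable as the
            // user edits this field. AdditionalFields (named "Fields" in the RC) lists
            // extra inputs to post along with UserName, so the server-side check has
            // enough context to validate.
            [Remote("UserNameAvailable", "Account", AdditionalFields = "AccountType")]
            public string UserName { get; set; }

            public string AccountType { get; set; }
        }

        public class AccountController : Controller
        {
            // A remote validation callback returns JSON: true when the value is valid,
            // or an error string to display next to the input.
            public JsonResult UserNameAvailable(string userName, string accountType)
            {
                bool available = !string.Equals(userName, "admin"); // placeholder check
                return Json(available ? (object)true : "That user name is taken.",
                            JsonRequestBehavior.AllowGet);
            }
        }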
    ViewResult.Model and ViewResult.ViewBag Properties

    The ViewResult class now exposes both a “Model” and a “ViewBag” property.  This makes it easier to unit test Controllers that return views, and avoids you having to access the Model via the ViewResult.ViewData.Model property.

    Installation Notes

    You can download and install the ASP.NET MVC 3 RC2 build here.  It can be installed on top of the previous ASP.NET MVC 3 RC release (it should just replace the bits as part of its setup).  The one component that will not be updated by the above setup (if you already have it installed) is the NuGet Package Manager.  If you already have NuGet installed, please go to the Visual Studio Extensions Manager (via the Tools –> Extensions menu option) and click on the “Updates” tab.  You should see NuGet listed there – please click the “Update” button next to it to have VS update the extension to today’s release.  If you do not have NuGet installed (and did not install the ASP.NET MVC RC build), then NuGet will be installed as part of your ASP.NET MVC 3 setup, and you do not need to take any additional steps to make it work.

    Summary

    We are really close to the final ASP.NET MVC 3 release, and will deliver the final “RTM” build of it next month.  It has been only a little over 7 months since ASP.NET MVC 2 shipped, and I’m pretty amazed by the huge number of new features, improvements, and refinements that the team has been able to add with this release (Razor, unobtrusive JavaScript, NuGet, dependency injection, output caching, and a lot, lot more).  I’ll be doing a number of blog posts over the next few weeks talking about many of them in more depth.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Running a scheduled task as SYSTEM with console window open

    - by raoulsson
    I am auto-creating scheduled tasks with this line within a Windows batch script:

        schtasks /Create /RU SYSTEM /RP SYSTEM /TN startup-task-%%i /TR %SPEEDWAY_DIR%\%TARGET_DIR%%%i\%STARTUPFILE% /SC HOURLY /MO 1 /ST 17:%%i1:00

    I wanted to avoid using specific user credentials and thus decided to use SYSTEM. Now, when checking in the Task Manager's process list or, even better, directly with the schtasks command itself, all is working well; the tasks are running as intended. However, in this particular case I would like to have an open console window where I can see the log flying by. I know I could use tail -f thelogfile.log if I installed e.g. Cygwin (on all machines) or some proprietary tool like BareTail on Windows. But since I only switch to these machines in case of trouble, I would prefer to start the scheduled task in such a way that every user immediately sees the log. Any chance? Thanks!

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series]

    LINQ to SQL has a lot of great features – strongly-typed queries, query compilation, deferred execution, a declarative paradigm, etc. – which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that changing data usually requires retrieving it first:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
                product.UnitPrice = unitPrice;
                database.SubmitChanges(); // UPDATE...
            }
        }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (SqlConnection connection = new SqlConnection(
                "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
            using (SqlCommand command = new SqlCommand(
                @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
                connection))
            {
                command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
                command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
                connection.Open();
                command.Transaction = connection.BeginTransaction();
                command.ExecuteNonQuery(); // UPDATE...
                command.Transaction.Commit();
            }
        }

    The above imperative code specifies the “how to do it” details and has better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the declarative code above should be replaced by:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    unitPrice, id);
            }
        }

    Or just create a stored procedure:

        CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
        (
            @ProductID INT,
            @UnitPrice MONEY
        )
        AS
        BEGIN
            BEGIN TRANSACTION
            UPDATE [dbo].[Products]
            SET [UnitPrice] = @UnitPrice
            WHERE [ProductID] = @ProductID
            COMMIT TRANSACTION
        END

    and map it as a method of NorthwindDataContext (explained in this post):

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.UpdateProductUnitPrice(id, unitPrice);
            }
        }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. From a developer’s perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    Having covered the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues – for example, performance of the data retrieval process. The previous post explained that SQL translation and execution is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
    It consists of about 15 steps to translate a C# expression tree into a SQL statement, which can be categorized as:

        Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
        Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
        Flatten: figure out the hierarchy of the query;
        Rewrite: for SQL Server 2000, if needed;
        Reduce: remove the unnecessary information from the tree;
        Parameterize: figure out the parameters – for example, a reference to a local variable should become a parameter in the SQL;
        Format: generate the SQL statement string;
        Materialize: execute the reader and convert the results back into typed objects.

    So for each data retrieval, even one which looks simple:

        private static Product[] RetrieveProducts(int productId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                return database.Products.Where(product => product.ProductID == productId)
                                        .ToArray();
            }
        }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query once and execute it multiple times:

        internal static class CompiledQueries
        {
            private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
                CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                    database.Products.Where(product => product.ProductID == productId).ToArray());

            internal static Product[] RetrieveProducts(
                this NorthwindDataContext database, int productId)
            {
                return _retrieveProducts(database, productId);
            }
        }

    The new version of RetrieveProducts() gets better performance, because only the first time _retrieveProducts is invoked does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens only once in multi-threading scenarios.

    Static SQL / stored procedures without translating

    Another way to avoid the translation overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article does not dive into them. For the details, Scott Guthrie already has some excellent articles:

        LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures)
        LINQ to SQL (Part 7: Updating our Database using Stored Procedures)
        LINQ to SQL (Part 8: Executing Custom SQL Expressions)

    Data changing overhead

    Looking into the data updating process, it also needs a lot of work:

        Begin a transaction.
        Process the changes (ChangeProcessor): walk through the objects to identify the changes, then determine the order of the changes.
        Execute the changes. LINQ queries may be needed to execute the changes: as in the first example in this article, an object needs to be retrieved before being changed, so the whole data retrieval process above is gone through again. If there is user customization, it is executed as well – for example, a table’s INSERT / UPDATE / DELETE can be customized in the O/R designer.

    It is important to keep this overhead in mind.
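
    As a side note (this uses the standard DataContext.Log property rather than anything from the article itself), an easy way to watch all of this translation work is to echo the generated SQL to the console:

        private static void TraceProductQuery()
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Log = Console.Out; // any System.IO.TextWriter works here

                Product[] products = database.Products
                    .Where(product => product.ProductID == 1)
                    .ToArray(); // the translated SELECT (with its parameters) is printed
            }
        }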
    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Products.DeleteAllOnSubmit(
                    database.Products.Where(product => product.CategoryID == categoryId));
                database.SubmitChanges();
            }
        }

    The expected SQL would be something like:

        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
        COMMIT TRANSACTION

    However, as mentioned above, the actual SQL retrieves the entities first, and then deletes them one by one:

        -- Retrieves the entities to be deleted:
        exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

        -- Deletes the retrieved entities one by one:
        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        -- ...
        COMMIT TRANSACTION

    The same applies to bulk updating. This is really not efficient, and it is something to be aware of. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement in an INNER JOIN, so that a single DELETE does the work:

        exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
        INNER JOIN (
            SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
            FROM [dbo].[Products] AS [t0]
            WHERE [t0].[CategoryID] = @p0) AS [j1]
        ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
        N'@p0 int',@p0=9

    Query plan overhead

    The last thing is the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure whether it is a bug): LINQ to SQL internally uses ADO.NET, but it did not set SqlParameter.Size for variable-length arguments, like arguments of NVARCHAR type, etc. So consider two queries with the same SQL but arguments of different lengths:

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            database.Products.Where(product => product.ProductName == "A")
                             .Select(product => product.ProductID).ToArray();

            // The same SQL and argument type, but a different argument length:
database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); } Pay attention to the argument length in the translated SQL:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA' Here is the overhead: The first query’s query plan cache is not reused by the second one:SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid; They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } } In this above example, the translated SQL becomes:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA' So that they reuses the same query plan cache: Now the [usecounts] column is 2.

    Read the article

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for mySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are PostgreSQL, and in Postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the Postgres password. I put this in root's home directory and made it chmod 0600, so that root can dump the Postgres databases without having to authenticate. Now I want to do something similar for MySQL, although I only have one MySQL database. How can I do this? I don't want to specify the password on the command line for mysqldump, because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built in to MySQL) to do this than making a file that only root can read, reading that to get the MySQL password, and then using it in the bash script as a variable?

    Read the article

  • Chroot with CentOS 5.3 + openssh 4.3p2

    - by Scud
    OS: CentOS 5.3, with OpenSSH 4.3p2. I'm trying to set up a chroot in the SSH shell, but OpenSSH versions prior to 4.8 don't accept the settings below. yum update openssh only takes me to version 4.3, which is quite old. Doesn't CentOS support OpenSSH 4.8 or later? If not, how can I set up a chroot with OpenSSH 4.3? Or is it better to just use FTP? My purpose is to limit SFTP or FTP access to a certain folder, not the root folder. Thanks!

        Match group sftponly
            ChrootDirectory /home/%u
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp

    Read the article

  • MacBookPro RAM: CL7 or CL9

    - by chris_l
    Hi, I want to increase the RAM in my MacBook Pro 15", 2.53 GHz, Mid-2009. I currently have 4GB, and I want 6GB (and plan to upgrade to 8GB later; I think this should work with my model: http://forums.macrumors.com/showthread.php?t=573906). Now I see two offers: 1x4GB DDR3 1333 CL9 9-9 (149 EUR) and 1x4GB DDR3 1066 CL7 7-7 (192 EUR). Both are from the same brand (Transcend), and both are labeled "For MacBook/MacBookPro". Now I have no idea which of these would work better with the 2GB RAM module from my currently installed memory (running at 1067 MHz, according to System Profiler, but with what CL setting?). Would the 1333 CL9 RAM be set to CL7 when running at 1066 MHz? Thanks, Chris. PS: Should I expect problems when I later upgrade to 8GB with a different module (in case the same module isn't produced anymore)?

    Read the article

  • Alice In Wonderland: Good, but not Great

    - by Theo Moore
    We went to see Alice in Wonderland today. We both like Tim Burton a lot (the stranger the better) and like Johnny Depp very well also. After seeing all the previews and such, we were fired up to see this film. Honestly, I thought it was good but not great. I was prepared to be wow-ed, but I wasn't. Perhaps I expected too much. I did like it, but I'll not own it, nor would I expect to see it again... unless someone I know decides they want to see it. I was about to say something to reassure you that I wasn't going to provide any spoilers, but two things occurred to me: one, I never give spoilers, and two, why worry about spoilers for a film that so closely follows a book? My comments about the film are hard to describe, but the basic gist is that it doesn't really feel like it... "works" to me. I can't get any more specific than that, much as I'd like to. Something about it seems sort of disjointed, and not in that Alice way you'd expect. My only specific comment is that I didn't like the actor who plays Alice very well. She was very flat and just didn't sell her character to me. She seemed a bit, well, plastic. Depp was as good as you'd expect him to be, I am happy to say. Obviously Lewis Carroll couldn't have imagined this made into film, but I can't help thinking that he'd see this and say that Depp was the perfect Mad Hatter. So, I'd definitely recommend seeing it (we saw it in 3D, which was cool but not really necessary) at least once, but don't be surprised if you're kinda meh afterwards.

    Read the article

  • LAMP Stack Versioning -- Is there a website or version tracker source to help suggest the right versions of each part of a platform stack?

    - by Chris Adragna
    Taken singly, it's easy to research versions and compatibility. Version information is readily available for each single part of a platform stack, such as MySQL. You can find out the latest version, the stable version, and sometimes even the percentage of people adopting it by version (personally, I like seeing numbers on adoption rates). However, when trying to find the best possible mix of versions, I have a harder time. For example, "if you're using MySQL 5.5, you'll need PHP version XX or higher." It gets even more difficult when you throw higher-level platforms into the mix, such as Drupal, Joomla, etc. I do consider "wizard"-like installers to be beneficial, such as the Bitnami installers. However, I always wonder if those solutions cater more to the least common denominator -- trying to be all things to all people -- and as such, I think I'd be better off installing things on my own. Such solutions do seem kind of slow to adopt new versions, slower than necessary, I suspect. Is there a website or tool that consolidates versioning data in order to help a webmaster choose which versions to deploy or which upgrades to install, in consideration of all the other parts of the stack?

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test "Simple Apps", community outreach to better understand and address developer pain points, and a new "Trusted Status." In the words of the Java Verified program, Trusted Status is:

        a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity

    The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test its applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for its new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • Visual Studio LightSwitch: Yes, these are the droids you&rsquo;re looking for

    - by Jim Duffy
    With all the news and focus on the new features coming in Silverlight 5, I thought I'd take a few minutes to remind folks about the work that Microsoft has done on LightSwitch, since the applications created by LightSwitch are Silverlight applications. LightSwitch makes it easier for non-coders to build business applications and easier for coders to maintain them. For those not familiar with LightSwitch, it is a new tool that provides an easier and quicker way for coder and non-coder types alike to create line-of-business applications for the desktop, the web, and the cloud. The target audience for this tool is those power-user types who create Access applications for their organization. While those Access applications fill an immediate need, they typically aren't very scalable, extendable and/or maintainable by the development staff of the organization. LightSwitch creates applications based on technologies built into Visual Studio, thus making it easier for corporate developers to extend and maintain them. LightSwitch is currently in beta, but it will ultimately become a new addition to the Visual Studio line of products. Go ahead and download the beta to get a better idea of what the product can do for your organization. The LightSwitch Developer Center contains:

        links to download the beta
        links to instructional videos
        links to tutorials
        links to the LightSwitch Training Kit

    Another quality resource for LightSwitch information is the Visual Studio LightSwitch Team Blog. My good friend Beth Massi is on the LightSwitch team and has additional valuable content on her blog. Have a day.

    Read the article

  • Check if root ca certificate is installed

    - by Zulakis
    We have a custom CA for our local domains. The root CA certificate is installed on all the corporate machines by default, but sometimes we have someone here who doesn't have it installed. If the user a) accesses our intranet using http, or b) accepts the server certificate, I would like to redirect them to a site which tells them what happened and how they can install the root CA. The only solution I found was the following:

        <img src="https://the_site/the_image" onerror="redirectToCertPage()">

    This is barely a workaround and not really a solution: it can be triggered by other problems besides the missing certificate. Are there any better solutions to this problem?

    Read the article

  • Recommended SpamAssassin update channels?

    - by Timo Geusch
    I'm currently using SpamAssassin on a couple of mail servers that I look after. SpamAssassin runs in the context of amavisd-new on those servers and with the usual bunch of plugins (FuzzyOCR, DCC, pyzor, razor). Currently the servers are getting their rule updates from the default SpamAssassin update channel (updates.spamassassin.org). Overall the setup seems to be reasonably effective but some types of spam seem to wander right through it even though I've made repeated attempts at training spamassassin. My guesstimate is that about 85%-90% of the spam that gets through policyd-weight makes it through the filters and it's been getting a lot worse recently as spammers are getting better at working their way through filters. Can someone recommend additional sources of filters to make SpamAssassin more effective? So far I've found OpenProtect's update channel but are there others worth looking at?

    Read the article

  • RAIDZ vs RAID1+0

    - by Hiro2k
    Hi guys, I just got four SSDs for my FreeNAS box. This server is only used to serve a single iSCSI extent to my Citrix XenServer pool, and I was wondering if I should set the SSDs up in a RAIDZ or a RAID 1+0 configuration. This isn't used for anything in production, just for my test lab, so I'm not sure which one is going to be better in this scenario. Will I see a major difference in speed or reliability? Currently the server has three 500GB Western Digital Blue drives, and it's dog slow when I deploy a new version of our software on it, hence the upgrade.

    Read the article

  • Merging Waterfall and Agile – Getting the Worst of Both Worlds

    - by Nick Harrison
    Many people have seen and appreciate the elegance and practicality of agile methodologies. Sadly, there is still not widespread adoption. There is still push-back from many directions and from many different sources:

        Some people don't understand how it is supposed to work.
        Some people don't believe that it could possibly work.
        Some people mistakenly believe that it is just code for a lazy project team trying to wiggle out of structure.
        Some people mistakenly believe that it can work only with a very small, highly trained team.
        Some people are afraid of the control that they feel they will be losing.

    I have seen some people try to merge agile and waterfall hoping to achieve the best of both worlds. Unfortunately, the reality is that you end up with the worst of both worlds. And they both can get pretty bad.

    Another Sad Reality

    Some people, in an effort to get buy-in for following an agile methodology, have attempted to merge these two practices. Sometimes this may stem from trying to assuage individual fears of losing relevance. Sometimes it may be to meet contractual obligations or to fulfill regulatory requirements. Sometimes they may not know better. These two approaches to software development cannot coexist on the same project.

    Let's review the main tenets of the Agile Manifesto:

        Individuals and interactions over processes and tools
        Working software over comprehensive documentation
        Customer collaboration over contract negotiation
        Responding to change over following a plan

    That is, while there is value in the items on the right, we value the items on the left more.

    Meanwhile, the main tenets of the waterfall approach could be summarized as:

        Processes and procedures over individuals
        Comprehensive documentation proves that the software works
        Well-defined contracts and negotiations protect the customer relationship
        If the plan is made right, there should be no change

    Merging these two approaches will always end badly.

    Read the article

  • Web server behind MikroTik and dynamic dns

    - by danielrvt
    I recently purchased a MikroTik router. It works great! However, I haven't been able to make my web server work from outside my LAN. I'll explain better: I have two domains at my disposal; before I switched to the MikroTik, they were working perfectly and all my websites were online. Since I changed the router, every time I try to access my websites from outside my LAN, they can't be found. I have my domains associated with a dynamic DNS provider, and I managed to create a port-forwarding rule to redirect all incoming traffic on port 80 to my web server. It works, but only when I'm connected to my MikroTik router. Is there something else I have to do? PS: I also created a static DNS rule in the router to associate my domains with my web server (which is behind the router). PS2: All I want is to redirect requests from outside to my web server...

    Read the article

  • How would i down-sample a .wav file then reconstruct it using nyquist? - in matlab [closed]

    - by martin
    This is all done in MATLAB 2010. My objective is to show the results of undersampling, sampling at the Nyquist rate, and oversampling. First I need to downsample the .wav file to get an incomplete/partial data stream that I can then reconstruct. The flow is: analog signal -> sampling analog filter -> ADC -> resample down -> resample up -> DAC -> reconstruction analog filter.

    What needs to be achieved (F = frequency, in Hz = cycles/sec): the Nyquist sampling period is T = 1/(2F). Example problem: for a highest frequency of 1000 Hz, T = 1/(2·1000) = 1/2000 = 5x10^(-4) sec/cyc, i.e. a sampling interval of 0.5 ms.

    This is my first signal processing project using MATLAB. What I have so far:

        % Fs = sampling frequency (44100 Hz, the sampling frequency of a CD)
        [test,fs]=wavread('test.wav'); % loads the .wav file
        left=test(:,1);

        % Plot of the .wav signal: time vs. strength
        time=(1/44100)*length(left);
        t=linspace(0,time,length(left));
        plot(t,left)
        xlabel('time (sec)'); ylabel('relative signal strength')

        % This is where I would need to sample it at the different frequencies
        % (above, below, and at the Nyquist frequency), I think.

        soundsc(left,fs) % plays the resultant audio, which is the same as the original
                         % (only at or above the Nyquist frequency, however)

    Can anyone tell me how to make it better, and how to do the sampling at the various frequencies?

    Read the article

  • OpenVPN DNS: VPN DNS stomping local VPN

    - by Eddie Parker
    I've finally noodled with OpenVPN enough to get it working. Even better, I can mount Samba drives, ping network machines through the TUN device, etc. – it's all great. However, I'm noticing that if I use the following directive, then some of the machines that are normally visible to the client on its own side (i.e., not through the VPN) get masked by some other server out on the Internet:

        push "dhcp-option DNS 10.0.1.1" # Push our local DNS to clients

    Is there any way to avoid this, besides hacking the 'hosts' file on the client machine? Ideally I'd like to use my VPN's DNS only for machines within that domain.

    Read the article

  • Campus Network Design - Firewalls

    - by user3081239
    I am designing a campus network, and the design looks like this (LINX is the London Internet Exchange and JANET is the Joint Academic Network). My goal is an almost fully redundant network with high availability, because it will have to support about 15k people, including academic staff, administrative staff and students. I have read some documents in the process, but I am still not sure about some aspects. I want to dedicate this question to firewalls: what are the driving factors in deciding to employ a dedicated firewall instead of an embedded firewall in the border router? From what I can see, an embedded firewall has these advantages:

        Easier to maintain
        Better integration
        One less hop
        Less space requirement
        Cheaper

    A dedicated firewall has the advantage of being modular. Is there anything else? What am I missing?

    Read the article

  • The Other "C" in CRM

    - by Brian Dayton
    Folks who know me know that I rarely, if ever, talk politics. And I never talk politicians. Having grown up in a household with one parent leaning left and the other leaning right, it was the best way to keep the peace. This isn't about politics. It's about "constituents" and the need to improve the services and service levels for people -- at the city, county, state/province, etc. level, all the way up to national governments. As a citizen and taxpayer, it's also important to me that these services be provided at a reasonable cost. If there's a better and more efficient way to do something, then it's my hope that a public sector organization takes advantage of technology the same way private sector companies do. Social services organizations have a complex job. They provide the services that people need, from healthcare and children's assistance to helping people find jobs. But many of these organizations are still managing these processes manually or with outdated, home-grown applications that could have been written up to 30 years ago. A lot has changed in technology. On the (this is as political as I'm going to get) political front, stakeholders like you and me are expecting greater transparency on where and how funds are spent. I'll admit that most of the time, when I think about CRM systems, I think about my experience as a customer of my bank, utilities company or cable operator. But now that I'm older and have children and a house, I find myself interacting more and more with agencies and services organizations. My experiences are sometimes good and sometimes not so good. Along those lines, last week's announcement of Siebel CRM 8.2 for Public Sector caught my eye. I don't know which CRM systems cities and counties utilize, but I'm going to start paying closer attention.

    Read the article

  • TeamViewer cannot connect

    - by Cetin Sert
    Last week I decided to use TeamViewer VPN to administer software on a server behind a firewall using Remote Desktop. It was easy to configure TeamViewer to start up with the system and make the VPN available on the other side, but now it fails partway through establishing the VPN connection. The remote machine is running Windows Server 2008 R2. Is there a native way to circumvent the external firewall, using a server role or feature, so that Windows Server does the VPN work itself? Do people have better / more reliable experiences with other products such as Hamachi? The requirements are as follows:

        Start at remote system start-up time
        Make VPN connections to the remote machine possible

    Read the article

  • SharePoint 2010 Design & Deployment Best Practices

    - by Michael Van Cleave
    Well, now that SharePoint 2010 has successfully launched and everyone is scratching for every piece of best-practices information they can get their hands on, I would like to invite anyone and everyone to come and take part in ShareSquared's next webinar. The webinar will cover some key information such as:

        Pros and cons of the different approaches to installing and configuring SharePoint 2010
        Configuration best practices for SharePoint 2010 farms
        Services architecture: dependencies, licensing, and topologies
        Information architecture guidance for sizing, multilingual support, multi-tenancy, and more
        Using tools such as SharePoint Composer and SharePoint Maestro to configure and deploy SharePoint 2010
        And most of all, avoiding common pitfalls for installation and deployment

    What is better than all of that? The even more exciting thing is that the presenters will be our very own SharePoint MVPs Gary Lapointe and Paul Stork. If you don't know who these guys are, then you should definitely check out their blogs and their contributions to the SharePoint community. To get more information and register, click here: REGISTER. Other great links to information in this post:

        ShareSquared, Inc
        Gary Lapointe's Blog
        Paul Stork's Blog
        SharePoint Composer

    Check it out and get up to speed from some of the best in the industry. Michael

    Read the article

  • Pagination, Duplicate Content, and SEO

    - by Iamtotallylost
    Please consider a list of items (forum comments, articles, shoes, it doesn't matter) which are spread over multiple pages. Different sort orders are supported (by date, by popularity, by price, etc.). So, a URL might look like this (I use the query style here to simplify things):

        /items?id=1234&page=42&sort=popularity
        /items?id=1234&page=5&sort=date

    Now, in terms of SEO, I think I should be worried about duplicate content. After all, each item appears at least as many times as there are sort orders. I've seen Matt Cutts talking about the rel=canonical link tag, but he also said that the canonical page should have very similar content. That is not the case here, because page #1 in a non-canonical sort order might have completely different items than page #1 in the canonical sort order. For a given non-canonical page, there is no clear canonical page listing all the same items, so I think rel=canonical won't help here. Then I thought about using the noindex meta tag on all pages with a non-canonical sort order, and not using it on pages with the canonical sort order. However, if I use that method, what will happen to backlinks pointing at non-canonical pages -- will they still spread their PageRank juice, even though the first page Googlebot (or any other crawler) encounters is marked as "noindex"? Can you please comment on my problem and what you think is the best solution? If you think you have a better solution, please consider that 1) I do not want to use JavaScript for this, and 2) I do not want all the items on one page. Thank you.

    Read the article

  • Thinking of Adopting the PRINCE2™ Project Management Methodology? Consider Using PeopleSoft Projects to Help

    - by Megan Boundey
    Ever wondered what the PRINCE2™ project management methodology is? Ever wondered if you could use PeopleSoft Projects (ESA) to manage your projects using PRINCE2™?  Published by the Office of Government Commerce in the UK, PRINCE2™ is a scalable, business case and product description-driven Project Management methodology based upon managing by exception. Project activities are organized around fulfilling and meeting the product description. Quality assurance, configuration control and risk management are all based upon ensuring that the product delivered accurately meets the product description. PRINCE2™ is built upon seven principles and seven themes, each underpinning the PRINCE2™project management processes. Important for today’s business environment, the focus throughout PRINCE2™ is on the Business Case, which describes the rationale and business justification for a project. The Business Case drives all the project management processes from initial project setup to successful finish. PRINCE2™, as a method and a certification, is adopted in many countries worldwide, including the UK, Western Europe and Australia. We’ve just released a new white paper, which provides you with an overview of the principles, themes and project management processes associated with PRINCE2™. It also shows how these map to the functionality available within PeopleSoft Projects (ESA). In the time it takes to drink a coffee, you can learn about PRINCE2™ and determine whether it might help you deliver better project results. We encourage you to take a look.

    Read the article

  • APCUPSD and Hyper V R2

    - by Jason Berg
    I'm about to deploy a Windows Server 2008 R2 machine with the Hyper V role. I'd like to get away from having to use a network management card with my APC UPS as I'm only shutting down 1 server (it just seems like an unneeded point of failure). I'd like to look into using apcupsd instead. Will this work properly if I use a serial connection? Have you got it working yourself? How is the SNMP monitoring? I really like being able to easily monitor my UPS with SNMP when powerchute is installed. Will I be sacrificing this completely? Is the network management card really the way to go with this? If so, why? Bonus question: Is there a better UPS out there that I should be recommending in the future?

    Read the article
