Search Results

Search found 4714 results on 189 pages for 'unbuffered queries'.

Page 101 of 189

  • MongoDB PHP EC2 Setup Configuration

    - by nathansizemore
    I am new to web development and server setup, and I am looking for advice or a link to a tutorial on setting up a production system. Right now I have one server (Ubuntu, Apache, MongoDB, and PHP). It receives a request, PHP queries Mongo, and PHP sends out the requested data. How do I make that work with more servers? I've read that you can make a cluster of a primary and two slave nodes, which work as separate servers running Mongo, but do those also run PHP, or does only the primary run PHP? I have read some docs on the Mongo site and watched a video of someone from 10gen going through it, but they are geared toward people who seem to already understand this stuff; I have no idea and need to start from the beginning. If anyone can help me understand where PHP (acting as my API) lives in these clusters, that would be greatly appreciated! Thanks in advance for any help!

    Read the article

  • CPU Configuration Issue for 2 Servers (Server 2008 R2)

    - by Bill Moreland
    I have 2 servers running the exact same Classic ASP code with Access DBs (yes, not ideal, but it is what it is, for now). 1) Xeon 5520 @ 2.27 GHz (6 GB memory) 2) Xeon E5-2620 @ 2.00 GHz (2 processors, 32 GB memory). For most pages the newer E5-2620 processes the pages 10-15% faster. On pages requiring heavy and/or multiple complicated Access stored procedures (queries), the older 5520 does a much better job. I believe the servers are configured nearly identically. My question: is it possible that the newer, multi-processor server is not as good at handling Classic ASP as the older single-processor one? Is there a configuration difference that needs to be in place that I'm missing, since I'm shooting for identical implementations?

    Read the article

  • my.ini optimization on Windows 2008 R2 VPS

    - by MKphpDev
    I have a VMware VPS running Windows Server 2008 R2 Enterprise that has performance issues with MySQL. Every few minutes, MySQL stalls for a few seconds and then responds to queries again. I'm sure that my.ini needs to be optimized, but unfortunately I don't have any experience with my.ini configuration. What's running on the server: 2 small WordPress blogs, 1 vBulletin forum (approx. 1.2 GB database, and increasing), and a small database for some plug-ins (no more than 4000 records). Server info: Intel Xeon X5550 @ 2.67 GHz, 6 GB RAM (memory usage has never exceeded 2 GB), MySQL 5.5, PHP 5.3.10, IIS 7. Current my.ini:

    ```ini
    [mysqld]
    default-storage-engine=INNODB
    sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    max_connections=250
    myisam_max_sort_file_size=20G
    innodb_additional_mem_pool_size=256M
    innodb_flush_log_at_trx_commit=1
    innodb_log_buffer_size=8M
    innodb_buffer_pool_size=512M
    innodb_log_file_size=128M
    innodb_thread_concurrency=10
    key_buffer_size = 512M
    myisam_sort_buffer_size = 8M
    join_buffer_size = 256K
    read_buffer_size = 256K
    sort_buffer_size = 256K
    table_cache = 4000
    thread_cache_size = 200
    wait_timeout = 30
    connect_timeout = 10
    tmp_table_size = 32M
    max_allowed_packet = 1M
    max_connect_errors = 10000
    query_cache_size = 16M
    query_cache_limit = 2M
    query_cache_type = 1
    query_cache_min_res_unit = 1024
    query_prealloc_size = 16384
    query_alloc_block_size = 16384
    skip-external-locking
    read_rnd_buffer_size=1M
    max_heap_table_size=16M
    thread_concurrency=8

    [mysqld_safe]
    open_files_limit = 8192

    [mysqldump]
    quick
    max_allowed_packet = 16M

    [myisamchk]
    key_buffer_size = 128M
    sort_buffer_size = 128M
    read_buffer = 2M
    write_buffer = 2M
    ```

    Any help with this, please?
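    One concrete starting point (a hedged diagnostic sketch using standard MySQL commands, not a tuned configuration): measure how large the InnoDB data actually is before adjusting innodb_buffer_pool_size, since the 1.2 GB vBulletin database alone exceeds the configured 512M pool.

    ```sql
    -- MySQL dialect: total InnoDB data + index size, in MB.
    -- If this exceeds innodb_buffer_pool_size, reads spill to disk.
    SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024) AS innodb_mb
    FROM information_schema.TABLES
    WHERE engine = 'InnoDB';

    -- Compare the configured pool size against reads that missed the pool:
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
    ```

    If the pool turns out to be undersized relative to the data, growing it within the unused ~4 GB of RAM is the usual first adjustment to try.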

    Read the article

  • minimum required bandwidth for remote database server

    - by user66734
    I want to build a small warehousing application for my company. We have a central warehouse which distributes to 8 sales points across the country, and they insist on an in-house solution. I am thinking of setting up a central MySQL Linux server and having the branches connect to it to store sales. Queries to the db from the branches will be minimal, maybe 10 per hour. However, I need all the branches to be able to store each sale's data (product ID, customer ID) in the central db, at peak at most once every five minutes. My question is: can I get away with simple 24 Mbps/768 kbps DSL lines? If not, what is the bandwidth requirement? Can I rely on a load balancing router to combine additional lines if needed? Can you propose some server hardware specs?
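    A rough back-of-the-envelope check (my own estimate, assuming a single sale INSERT is on the order of 1 KB on the wire including protocol overhead) suggests bandwidth is nowhere near the constraint:

    $$768\ \text{kbps} \approx 96\ \text{KB/s}, \qquad \frac{1\ \text{KB per sale}}{300\ \text{s}} \approx 3.4\ \text{B/s} \ll 96\ \text{KB/s}$$

    Even with all eight branches writing at once, utilization stays far below one percent of a single 768 kbps uplink; line latency and reliability are the more likely concerns.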

    Read the article

  • QPS for dnscache

    - by vedaprasad
    I have 2 internal DNS servers (ns1 & ns2) on Ubuntu 12.04 which run dnscache, and my clients' resolv.conf looks something like: nameserver ns1, nameserver ns2, nameserver 8.8.8.8. All my load is taken by ns1, while ns2 sits idle unless ns1 is down or not serving requests. I would like to put these 2 servers behind a load-balancer VIP, but my network team wants to know the QPS of the NS servers so that the LB can be sized properly. Is there any way to find out the QPS of DNS queries served by dnscache?

    Read the article

  • IIS asks for login/pass when accessed using hostname but not when ‘localhost’ is used. Why?

    - by sb
    Hi All, I have set up IIS on my XP machine with the default home page that comes with the IIS install (a help page, I think). When I access the page with http://localhost it works fine (IE/Chrome/FF), but when I access it using http://hostname it prompts for a login/password, and it works when I enter my domain ID and password on the intranet. I have ensured that "anonymous access" is enabled in the properties window of the default site and the "websites" node. I searched Stack Overflow for similar queries, but some answers indicate I need to change the IE/FF settings to allow "integrated security" etc., and some suggest looking at the log file. I don't want to change the IE settings, and there is nothing unusual in the IIS server's log file. Can anybody help me figure out why this is happening? Thank you, sb

    Read the article

  • What is this PHP process? It is crippling my server

    - by user1019588
    This process has been using 65% of my site's CPU and has lasted for about 10 minutes now (aren't processes only supposed to run for a couple of seconds?). It is obviously something with MySQL. That makes sense because I have a lot of queries going, but something still seems a bit odd... It could have something to do with the bad PDO connection that I mentioned in a previous question. Perhaps I am opening too many connections or something like that? Here are the stats on it: Owner: mysql; Priority: 0; CPU %: 61.1; Memory %: 0.4; Command: /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/cvps54834319.myhost.com.err --pid-file=/var/lib/mysql/cvps54834319.myhost.com.pid. Thanks for any help on this. I have over 10 GHz on my server, so this is very concerning to me.
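    As a first diagnostic (a hedged sketch using only standard MySQL commands), it is worth looking at what the server is actually executing while the CPU is pegged, rather than guessing from the process list:

    ```sql
    -- MySQL dialect: show every client query currently running inside mysqld,
    -- including how long each has been executing (the Time column).
    SHOW FULL PROCESSLIST;

    -- Summarize InnoDB activity: row lock waits, pending I/O, buffer state.
    SHOW ENGINE INNODB STATUS;
    ```

    A single long-lived mysqld process handling many connections is normal; one long-running query, or a storm of unindexed ones (possibly from the PDO code mentioned above), would show up immediately in the Time column.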

    Read the article

  • DNSCache excessive memory usage Windows Server 2008 R2

    - by MikeT
    We are having an issue with the dnscache service where its memory usage becomes excessive (~6 GB) after a week or two. Restarting the service frees this memory, but performing ipconfig /flushdns does not; an ipconfig /displaydns shows approx. 15-20 entries in the cache. We have checked, and there appear to be approx. 150 DNS queries per second taking place, but I would not expect this to cause such a memory issue. I have tried searching MSDN for hotfixes or bug reports, but I could only find a reference to a memory leak in Windows 2003. Can anyone suggest how to proceed?

    Read the article

  • PHP's page generation time is 0.01s. 1/0.01 = 100; however I'm having problems reaching that number of requests per second. Why?

    - by cedivad
    On average, my PHP page generation time is 10 ms, so I should be able to execute 100 requests one after another (using a single core on the server, since PHP is not multithreaded). However, I'm having problems reaching even 50 pages per second; as of now I average 25, under medium load. The application is really light: it consists of a small read (<5 KB) from a pool of SSDs and some read queries resolved by indexes. Where should I look to solve this bottleneck?
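    The 1/0.01 = 100 figure assumes the core spends 100% of its time generating pages. A more realistic framing (Little's law; the 30 ms figure below is illustrative, not measured from the post) accounts for per-request time spent outside PHP, such as SSD reads, MySQL round-trips, and connection handling:

    $$\text{throughput} = \frac{\text{concurrency}}{\text{latency}}, \qquad \frac{1\ \text{worker}}{0.01\ \text{s} + 0.03\ \text{s}} = 25\ \text{req/s}$$

    On this model, roughly 30 ms of hidden wait per request is enough to explain the observed 25 req/s on a single core; profiling where those milliseconds go is the way to find the bottleneck.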

    Read the article

  • non-mapped virtual memory & total number of connections

    - by tszming
    We have two MongoDB data nodes (a replica set) - primary & secondary. I noticed that the non-mapped virtual memory is relatively high and am wondering if it is hurting our MongoDB performance (the server usually peaks at around 6-7K queries per second). In MMS, it is stated: "The most common case of usage of a high amount of memory for non-mapped is that there are very many connections to the database." So we checked the memory usage with db.serverStatus().mem on our secondary: { "bits" : 64, "resident" : 6846, "virtual" : 416797, "supported" : true, "mapped" : 205549, "mappedWithJournal" : 411098, "note" : "virtual minus mapped is large. could indicate a memory leak" }. Note: we are using 2.0.4, and the default stack size should now be 1 MB per connection. The current number of connections is around 1.1K, but the non-mapped virtual memory (virtual - mappedWithJournal) is around 5699 MB. The trend is quite stable, so I can't say there is a leak here, but where has the memory gone? Any idea?
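    Working the numbers from db.serverStatus().mem through (my arithmetic, using the 1 MB default stack size mentioned above):

    $$416797 - 411098 = 5699\ \text{MB non-mapped}, \qquad 1100\ \text{connections} \times 1\ \text{MB} \approx 1.1\ \text{GB}$$

    So connection stacks plausibly account for only about 1.1 GB of the 5.7 GB, which is exactly why the question is fair: the remaining ~4.6 GB is not explained by connection count alone.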

    Read the article

  • Should I upgrade memory for my dedicated server?

    - by mc1000
    My memory usage hangs around 25% (swap is generally 1%) on my dedicated server, and load is around 2-5. My host recommended that I upgrade from 2 GB of RAM to 4 GB so that I can increase innodb_buffer_pool_size from the default 16 MB to 2 GB; my InnoDB table size is 2 GB. My question is: given that RAM usage is 25%, does it make sense to upgrade? Queries sometimes hang, so I'm thinking a bigger InnoDB buffer pool could decrease the load on the database; I'm just not sure whether I really need to upgrade the RAM first.
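    Before buying RAM, a hedged way to confirm that the 16 MB buffer pool is the problem is to look at how often InnoDB has to go to disk (standard MySQL status counters):

    ```sql
    -- MySQL dialect: logical reads satisfied from the buffer pool vs. reads
    -- that had to hit disk. A high Innodb_buffer_pool_reads count relative to
    -- read_requests suggests the pool is far too small for the 2 GB table.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
    ```

    If the disk-read counter grows quickly during the hangs, the bigger buffer pool (and hence the RAM upgrade) is likely to help.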

    Read the article

  • Get other LDAP query strings associated with a domain

    - by seekerOfKnowledge
    I have in Softerra LDAP Administrator something like the following: server: blah.gov; OU=Domain Controllers; etc...; ldap://subdomain.blah.gov. I can't figure out how to get those other LDAP subdomain query strings in C#. I'm not sure how else to explain it, so ask questions and I'll try to clarify. Update: this is what Softerra LDAP Administrator looks like. The LDAP queries near the bottom are not children of the above node, but somehow the program knows about them and links them in the GUI. If I could figure out how, that would fix my problem.

    Read the article

  • Webserver - Memory-bound or CPU-bound? [closed]

    - by JJP
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Web Sites. I'm installing a social network using Zend Framework & MySQL, with lots of plugins & queries, and I want the web server & SQL server on one box. I'm trying to choose between two machines (on hetzner.de): A) Intel i7-2600, 3.4 GHz, 16 GB DDR3 RAM; B) Intel i7-920, 2.6 GHz, 24 GB DDR3 RAM. B has 50% more RAM but a 30% slower clock speed. The question is: is it obvious where the bottleneck will be? Would I ever need 24 GB of RAM, even with lots of concurrent users?

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. The increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point everyone looking for further information to the same posts: SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup; SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2; SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3; SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries. In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to leave this question open to all of you and see what the community thinks: is there any other way, besides creating a covering index or an index with included columns, to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
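    For readers landing here without the background, a minimal sketch of the standard fix discussed in the linked posts (table and index names are illustrative, not taken from the original articles): a key lookup appears when a nonclustered index locates the rows but the query needs columns the index does not carry; carrying those columns in the index removes the lookup.

    ```sql
    -- With a plain index on OrderDate, every matching row of the query
    -- below costs a key lookup to fetch TotalDue from the base table.
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
        ON dbo.Orders (OrderDate);

    -- Covering alternative: INCLUDE stores TotalDue at the leaf level,
    -- so the query is answered entirely from the index (no key lookup).
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Covering
        ON dbo.Orders (OrderDate)
        INCLUDE (TotalDue);

    SELECT OrderDate, TotalDue
    FROM dbo.Orders
    WHERE OrderDate = '20100401';
    ```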

    Read the article

  • SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010

    - by pinaldave
    This is a report on my third very successful seminar event on SQL Server Performance Tuning. The SQL Server Performance Tuning Seminar in Colombo was oversubscribed with a total of 35 attendees; you can read the details over here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010. The SQL Server Performance Tuning Seminar in Hyderabad was oversubscribed with a total of 25 attendees; you can read the details over here: SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010. The same seminar was offered in Pune on December 4-5, 2010, and we had another successful event with lots of performance talk, attended by 30 people. The best part of the seminar was that along with our agenda, we talked about the following very interesting concepts: Deadlocks Detection and Removal; Dynamic SQL and Inline Code SQL Optimizations; Multiple OR Conditions and Performance Tuning; Dynamic Search Condition Building and Improvement; Memory Cache and Improvement; Bottleneck Detection – Memory, CPU and IO; Beginning Performance Tuning on Production; Parametrization; Improving Already Super Fast Queries; Convenience vs. Performance; the Proper Way to Create Indexes; Hints and Disadvantages. I had a great time doing the seminar and sharing my performance tricks with all. The highlight of this seminar was explaining to the attendees how I begin performance tuning when I go on Performance Tuning Consultations. [Photos: Pinal Dave at the SQL Server Performance Tuning Seminar] This seminar series is 100% demo oriented with none of the usual PowerPoint talk; it is created from my experiences of performance tuning at various organizations. I am not planning any more seminars this year; it was great, but I am currently booked for the next 60 days at various performance tuning engagements. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Visual Studio 2010 Installation Screenshots, Links to Installation Guides, Forum

    Today I installed Visual Studio 2010 on my new Sony Vaio laptop. I have a habit of taking screenshots while setups are running; it helps me later when I want to find which items I installed for that software. Taking screenshots is not really required for software like Visual Studio, since it provides add/remove items at any time, but below are the screenshots for members who are new to Visual Studio installation. It's pretty easy and self-explanatory if you follow the instructions in the installation wizard. I thought it would do several system restarts like earlier versions, but VS2010 did not restart the machine; it just reported a successful installation. You might want to refer to this link for further assistance, and you can also ask your queries in this forum. You can also find the installation guide. Happy coding with Visual Studio 2010 :-) You might also want these other articles: 27 New Features of .NET Framework 4.0; New Features of IIS 7.0; 22 New Features of Visual Studio 2008 for .NET Professionals

    Read the article

  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on Codeplex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice looking front-end. I've only read the documentation so far, but it has graphs and a debug view that shows your package with the colour animations similar to when debugging in BIDS, and everyone knows the way the pretty colours and numbers change is the best bit! I'll quote some of the features for you here and then let you make your own mind up: is it useful in the real world? Option to analyze the logs manually by applying row and column filters over the log data or by using queries to specify more complex criteria. Automated Performance Analysis which provides a quick graphical look at which tasks spent the most time during package execution. Rerun (debug) the entire sequence of events which happened during package execution, showing the flow of control in graphical form and changes in runtime values for each task, like execution duration etc. Support for Auto Analyzers to automatically find issues and provide suggestions for problems which can be figured out with the help of SSIS logs and/or the package. Option to analyze just the log file, or the log and package together. Provides a lightweight environment for a quick look at the package; opening it in BIDS takes some time because, being an authoring environment, it does all sorts of validations, resulting in some delay. See http://ssisloganalyzer.codeplex.com/ for more details.

    Read the article

  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
    I recently presented a session on Statistics and Best Practices at Virtual Tech Days on Nov 22, 2010. The session was very popular, and I got many questions right after it. The number one question I received was where everybody can get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server: statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft. Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Writers: Eric N. Hanson and Yavor Angelov. Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed at different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with the optimizer's use of statistics, and it provides guidance for using these features when writing queries so you can obtain good query performance. Link to white paper: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
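    For readers who asked where to start, these are the standard T-SQL commands for inspecting and controlling statistics of the kind the white paper describes (shown here as a quick reference; the table, index, and statistic names are placeholders):

    ```sql
    -- View the histogram and density information behind an index or statistic.
    DBCC SHOW_STATISTICS ('Sales.SalesOrderHeader', 'IX_SalesOrderHeader_OrderDate');

    -- Create and refresh statistics manually.
    CREATE STATISTICS st_OrderDate ON Sales.SalesOrderHeader (OrderDate);
    UPDATE STATISTICS Sales.SalesOrderHeader WITH FULLSCAN;

    -- The database-level defaults the paper discusses can be changed like this:
    ALTER DATABASE AdventureWorks SET AUTO_CREATE_STATISTICS ON;
    ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;
    ```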

    Read the article

  • Book Review: "Inside Microsoft SQL Server 2008: T-SQL Querying" by Itzik Ben-Gan et al

    - by Sam Abraham
    In the past few weeks, I have been reading "Inside Microsoft SQL Server 2008: T-SQL Querying" by Itzik Ben-Gan et al. In the next few lines, I will provide a quick book review, having finished reading this valuable resource on SQL Server 2008. In this book, the authors target most of the common as well as advanced T-SQL querying scenarios that one would use for development on a SQL Server database. The book's content covers sufficient theory and practice to empower its readers to systematically write better performance-tuned queries. Chapter one offers a quick refresher on the basics of query processing. Chapters 2 and 3 follow with thorough coverage of applicable relational algebra concepts, which sets a good stage for chapter 4 to dive deep into query tuning. Chapter 4 has been my favorite chapter of the book, as it provides nice illustrations of the internals of indexes, waits, statistics, and query plans. I particularly appreciated the thorough explanation of execution plans, which helped clarify some areas I may not have paid particular attention to in the past. The book continues to focus on SQL operators, tackling a few in each chapter and covering their internal workings and the best practices to follow when they are used. Figures and illustrations are particularly helpful in grasping the advanced concepts covered therein. In conclusion, Inside Microsoft SQL Server 2008: T-SQL Querying provided me with 750+ pages of focused, advanced, and practical knowledge that has added a few tips and tricks to my arsenal of query tuning strategies. Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group. --Sam Abraham

    Read the article

  • SQL SERVER – A Puzzle – Illusion – Confusion – April Fools’ Day

    - by pinaldave
    Today is April 1st, and just like every other year, I like to bring something interesting and light for the day; at least there should be some days in everyone's life when they can take it easy. Here is a quick puzzle for you, and I believe you will feel extremely smart if you can figure out the reasoning behind the result. Run the following in SQL Server Management Studio and observe the output: SELECT 30.0/(-2.0)/5.0; SELECT 30.0/-2.0/5.0; Here are a few questions for you: 1) What will be the results of the above two queries? 2) Why? If you think you can figure out the results without executing them, I encourage you to execute BOTH of them in SSMS and see whether they give you the same result or different results. Well, now I am waiting for your answer here - why? I often post similar things on my Facebook page http://facebook.com/SQLAuth - you are welcome to play with me there. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
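    A hedged hint for checking your answer afterwards (my own parenthesized rewrites, not from the original post): the usual explanation is T-SQL operator precedence, where unary minus sits a level below multiplication and division, so the two queries are commonly said to parse as the explicit forms below. Run all four in SSMS and compare.

    ```sql
    -- The two original queries:
    SELECT 30.0/(-2.0)/5.0;
    SELECT 30.0/-2.0/5.0;

    -- Believed-equivalent explicit forms (verify for yourself):
    SELECT (30.0 / (-2.0)) / 5.0;   -- plain left-to-right division
    SELECT 30.0 / -(2.0 / 5.0);     -- unary minus applies to the 2.0/5.0 division
    ```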

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call a repository in terms of DDD, but it is an abstraction layer that becomes handy when unit testing the code around it. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two-way data binding, which is very helpful for developing WPF or Silverlight based applications, but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services client library introduced a new collection, DataServiceCollection<T>, that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means it is no longer necessary to manually set or remove the links in the data context when an item is added to or removed from a collection. Before this new collection existed, you basically used the following code to add a new item to a collection:

    ```csharp
    Order order = new Order { Name = "Foo" };
    OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

    var context = new OrderContext();
    context.AddToOrders(order);
    context.AddToOrderItems(item);
    context.SetLink(item, "Order", order);
    context.SaveChanges();
    ```

    Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or LINQ to SQL:

    ```csharp
    Order order = new Order { Name = "Foo" };
    OrderItem item = new OrderItem { Name = "bar", UnitPrice = 10, Qty = 1 };

    order.Items.Add(item);

    var context = new OrderContext();
    context.AddToOrders(order);
    context.SaveChanges();
    ```

    In order to use this new feature, you first need to enable V2 in the data service, and then pass some specific arguments to the DataSvcUtil tool (you can find more information about this new feature and how to use it in this post):

    ```
    DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0
    ```

    Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than the simple ObjectCollection<T> that was the default collection in V1. There are some aspects you need to know to use this feature correctly:

    1. All the entities retrieved directly from the data context with a query track changes and report them to the data context automatically.

    2. An entity created with "new" does not track any changes to its properties or associations. To enable change tracking on such an entity, you need the following trick:

    ```csharp
    public Order CreateOrder()
    {
        var collection = new DataServiceCollection<Order>(this.context);
        var order = new Order();
        collection.Add(order);
        return order;
    }
    ```

    You basically need to create a collection and add the entity to it with the "Add" method to enable change tracking on that entity.

    3. If you need to attach an existing entity (for example, one created with the "new" operator rather than retrieved from the data context with a query) to a data context for change tracking, you can use the "Load" method of the DataServiceCollection:

    ```csharp
    var order = new Order { Id = 1 };
    var collection = new DataServiceCollection<Order>(this.context);
    collection.Load(order);
    ```

    In this case, the order with Id = 1 must exist in the data source exposed by the data service; otherwise you will get an error because the entity does not exist.

    The cool extension methods discussed by Stuart Leeks in this post, which replace all the magic strings in the "Expand" operation with expression trees, are another feature I am going to use to implement this generic repository. Thanks to these extension methods, you can replace the following query that uses magic strings with a piece of code that only uses expressions.

    Magic strings:

    ```csharp
    var customers = dataContext.Customers
        .Expand("Orders")
        .Expand("Orders/Items");
    ```

    Expressions:

    ```csharp
    var customers = dataContext.Customers
        .Expand(c => c.Orders.SubExpand(o => o.Items));
    ```

    That query basically returns all the customers with their orders and order items. OK, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this:

    ```csharp
    public interface IRepository
    {
        T Create<T>() where T : new();
        void Update<T>(T entity);
        void Delete<T>(T entity);
        IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties);
        IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate,
            params Expression<Func<T, object>>[] eagerProperties);
        void Attach<T>(T entity);
        void SaveChanges();
    }
    ```

    The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions for loading associations explicitly, only the Retrieve method receives a predicate representing the "where" clause. The following code represents the final implementation of this repository:

    ```csharp
    public class DataServiceRepository : IRepository
    {
        DataServiceContext context;

        public DataServiceRepository()
            : this(new DataServiceContext())
        {
        }

        public DataServiceRepository(DataServiceContext context)
        {
            this.context = context;
        }

        private static string ResolveEntitySet(Type type)
        {
            var entitySetAttribute = (EntitySetAttribute)type
                .GetCustomAttributes(typeof(EntitySetAttribute), true)
                .FirstOrDefault();
            if (entitySetAttribute != null)
                return entitySetAttribute.EntitySet;
            return null;
        }

        public T Create<T>() where T : new()
        {
            var collection = new DataServiceCollection<T>(this.context);
            var entity = new T();
            collection.Add(entity);
            return entity;
        }

        public void Update<T>(T entity)
        {
            this.context.UpdateObject(entity);
        }

        public void Delete<T>(T entity)
        {
            this.context.DeleteObject(entity);
        }

        public void Attach<T>(T entity)
        {
            var collection = new DataServiceCollection<T>(this.context);
            collection.Load(entity);
        }

        public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate,
            params Expression<Func<T, object>>[] eagerProperties)
        {
            var entitySet = ResolveEntitySet(typeof(T));
            var query = context.CreateQuery<T>(entitySet);
            foreach (var e in eagerProperties)
            {
                query = query.Expand(e);
            }
            return query.Where(predicate);
        }

        public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties)
        {
            var entitySet = ResolveEntitySet(typeof(T));
            var query = context.CreateQuery<T>(entitySet);
            foreach (var e in eagerProperties)
            {
                query = query.Expand(e);
            }
            return query;
        }

        public void SaveChanges()
        {
            this.context.SaveChanges(SaveChangesOptions.Batch);
        }
    }
    ```

    For instance, you can use the following code to retrieve the customers with first name equal to "John", together with all their orders, in a single call:

    ```csharp
    repository.Retrieve<Customer>(
        c => c.FirstName == "John",              // where clause
        c => c.Orders.SubExpand(o => o.Items));  // eager-loaded associations
    ```

    In case you want to have some pre-defined queries that you are going to use across several places, you can put them in a specific class:

    ```csharp
    public static class CustomerQueries
    {
        public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)
        {
            return c => c.LastName == lastName;
        }
    }
    ```

    And then use it with the repository:

    ```csharp
    repository.Retrieve<Customer>(
        CustomerQueries.LastNameEqualsTo("foo"),
        c => c.Orders.SubExpand(o => o.Items));
    ```

    Read the article

  • How to get an IP address out of the SPAMHAUS blacklist?

    - by vgv8
    I frequently read that it is possible to remove individual IP addresses from SPAMHAUS blacklisting. OK. Here is 91.205.43.252 (91.205.43.251 - 91.205.43.253), used by back3.stopspamers.com (back2.stopspamers.com, back1.stopspamers.com) in a geo-cluster on dedicated servers in Switzerland. The queries: http://www.spamhaus.org/query/bl?ip=91.205.43.251 http://www.spamhaus.org/query/bl?ip=91.205.43.252 http://www.spamhaus.org/query/bl?ip=91.205.43.253 tell that 91.205.43.251 - 91.205.43.253 are all listed in the SBL80808 blacklist. And the SBL80808 blacklist says: "Ref: SBL80808 91.205.40.0/22 is listed on the Spamhaus Block List (SBL) 01-Apr-2010 05:52 GMT | SR04 Spamming and now seems this place is involved in other fraud". 91.205.43.251-91.205.43.253 are not listed among criminal IP addresses individually, but there is no way to remove them individually from the blacklist. How do I remove these individual addresses (91.205.43.251-91.205.43.253) from the SPAMHAUS blacklist? And why the heck is SPAMHAUS blacklisting a spam-stopping service? This is only one example of a bunch. My related posts: Blacklist IP database. Update: From the answer provided I realized that my question was not even understood. These IP addresses, 91.205.43.251 - 91.205.43.253, are not blacklisted individually; they are blacklisted through their supernet 91.205.40.0/22. Also note that the dedicated server, the ISP, and the customer are in very different, distant countries. Update 2: http://www.spamhaus.org/sbl/sbl.lasso?query=SBL80808#removal says: "To have record SBL80808 (91.205.40.0/22) removed from the SBL, the Abuse/Security representative of RIPE (or the Internet Service Provider responsible for supplying connectivity to 91.205.40.0/22) needs to contact the SBL Team". There are dozens of "abusers" in that SBL80808 blacklist. The company using the dedicated server is not an ISP or a RIPE representative able to handle these issues. And even if it were, someone pressing "Report spam" on the internet is all it takes to be blacklisted again; this is a fruitless approach. These techniques are broadly used by criminals and spammers. See also my post on blacklisting. This is just one specific example, but there are many, many more.

    Read the article

  • SQL SERVER – CTRL+SHIFT+] Shortcut to Select Code Between Two Parenthesis

    - by pinaldave
    Every weekend brings creative ideas, and accidents bring the best unknown secrets in front of us. Just a day ago, while working with complex SQL Server code in SSMS, I came across a very interesting shortcut which I had never used before, and I instantly fell in love with it. It is totally possible that you are familiar with it, but for me it was the first time, and I was surprised that I did not know this shortcut until now. The shortcut key is CTRL+SHIFT+]. It can be very useful when dealing with multiple subqueries, CTEs, or queries with multiple parentheses: when exercised, this shortcut selects the T-SQL code between two matching parentheses. Let us see examples to understand it. In each of the examples, I put the cursor at the position displayed and pressed CTRL+SHIFT+], and it selected the code between the two corresponding parentheses. [Screenshots: cursor position 1, cursor position 2, cursor position 3] If you are a developer and have to code with complex queries, you will totally appreciate that this feature can save so much development time. I remember my experience as a developer, when I lost a lot of hours just balancing parentheses. As I said, I found this shortcut accidentally. How many of you were aware of this feature? Is there any other useful feature you would like to share with us? Please leave a comment, and if I have not covered it earlier, I will share it with due credit on this blog. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Shortcut
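    If you want something to practice on, here is a small nested query of my own (AdventureWorks-style names used purely for illustration, not taken from the original post). Paste it into SSMS, put the cursor just inside any opening parenthesis, and press CTRL+SHIFT+].

    ```sql
    -- Place the cursor inside a parenthesis and press CTRL+SHIFT+]
    -- to select everything up to the matching closing parenthesis.
    SELECT h.SalesOrderID, h.OrderDate
    FROM Sales.SalesOrderHeader h
    WHERE h.CustomerID IN (SELECT c.CustomerID
                           FROM Sales.Customer c
                           WHERE c.TerritoryID = (SELECT t.TerritoryID
                                                  FROM Sales.SalesTerritory t
                                                  WHERE t.Name = 'Northwest'));
    ```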

    Read the article

  • SQL SERVER – Solution – Challenge – Puzzle – Usage of FAST Hint

    - by pinaldave
    Earlier I posted a quick puzzle and received a wonderful response to it from Brad Schulz. Today we will go over the solution. The puzzle was posted here: SQL SERVER – Challenge – Puzzle – Usage of FAST Hint. The question was under which conditions the FAST hint would be useful. In response to this puzzle, SQL Server expert Brad Schulz pointed me to his blog post where he explains how the FAST hint can be useful. I strongly recommend reading his blog post over here. With Brad's permission, I am reproducing the following queries here; he came up with an example where the FAST hint improves performance.

    ```sql
    USE AdventureWorks
    GO
    DECLARE @DesiredDateAtMidnight DATETIME = '20010709'
    DECLARE @NextDateAtMidnight DATETIME = DATEADD(DAY, 1, @DesiredDateAtMidnight)

    -- Query without FAST
    SELECT OrderID = h.SalesOrderID
          ,h.OrderDate
          ,h.TerritoryID
          ,TerritoryName = t.Name
          ,c.CardType
          ,c.CardNumber
          ,CardExpire = RIGHT(STR(100 + ExpMonth), 2) + '/' + STR(ExpYear, 4)
          ,h.TotalDue
    FROM Sales.SalesOrderHeader h
    LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID = t.TerritoryID
    LEFT JOIN Sales.CreditCard c ON h.CreditCardID = c.CreditCardID
    WHERE OrderDate >= @DesiredDateAtMidnight
      AND OrderDate < @NextDateAtMidnight
    ORDER BY h.SalesOrderID;

    -- Query with FAST(10)
    SELECT OrderID = h.SalesOrderID
          ,h.OrderDate
          ,h.TerritoryID
          ,TerritoryName = t.Name
          ,c.CardType
          ,c.CardNumber
          ,CardExpire = RIGHT(STR(100 + ExpMonth), 2) + '/' + STR(ExpYear, 4)
          ,h.TotalDue
    FROM Sales.SalesOrderHeader h
    LEFT JOIN Sales.SalesTerritory t ON h.TerritoryID = t.TerritoryID
    LEFT JOIN Sales.CreditCard c ON h.CreditCardID = c.CreditCardID
    WHERE OrderDate >= @DesiredDateAtMidnight
      AND OrderDate < @NextDateAtMidnight
    ORDER BY h.SalesOrderID
    OPTION (FAST 10)
    ```

    Now when you check the execution plans for these, you will find a visible difference: the query with FAST shows a much lower cost. Thank you, Brad, for the excellent post and for teaching us something. I request all of you to read Brad's original blog post for much more information. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
