Search Results

Search found 30085 results on 1204 pages for 'sql trace'.

Page 452 of 1204

  • Redundant Microsoft server solution for small company

    - by MadBoy
    I'm planning to replace a single Microsoft SBS 2003 server running SharePoint, Exchange and a SQL database with something that gives me some redundancy and is not a single point of failure. I was thinking of buying two identical physical servers and putting two virtualized servers on Hyper-V or VMware on each. I would then put SharePoint, Exchange and SQL on the first physical server (spread across its two VMs). I would like the second physical server to be an exact duplicate of the first, so that when the first goes down (for a reboot or a hardware failure) the second takes over and users don't notice that anything has changed (all their email and SharePoint content stays available). My questions are: Will I have to pay for licenses for both servers even though only one instance of SharePoint, Exchange and SQL will be in use at a time? What are the proposed solutions for doing this? What additional hardware would I need, and how complicated a software configuration should I expect, to set up redundancy so that when one physical server goes down the second one takes over? What problems should I expect? This solution is for 60 people; later on it may or may not expand.

    Read the article

  • storing map template in database

    - by Timigen
    I am working on an application that displays choropleth maps. These maps come in different types: some display a state by county, a country by state/province, or the world by country. How should I handle storing the map information in the database? My thoughts: I won't need to run queries to find POIs inside a region, so I don't think there is a need to use spatial datatypes. I am considering storing each map as a GeoJSON object (I am using a JS mapping library that accepts GeoJSON). The only issue is: what if I want a map of the US Northeast? Then I would have GeoJSON for the US and a separate copy for the US Northeast, which would be redundant. Would it make sense to have a shape database that stores each state, so that when I need a map of the US I can query for every state, and when I need a map of the US Northeast I can query for just the states I need? Note: I am not concerned with storing the data for each region, just the region shape itself. I will query for the data on the fly for the specific region.
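    One way to model the "shape database" idea, sketched here as an assumption (MySQL-flavored; table and column names are invented, not from the question): one row per region with its GeoJSON geometry, plus a grouping table so a composite map such as "US Northeast" is just a set of references to existing shapes.

```sql
-- Hypothetical schema: one shape per region, reusable across map definitions.
CREATE TABLE region (
    region_id  INT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,   -- e.g. 'Vermont'
    level      VARCHAR(20)  NOT NULL,   -- 'county', 'state', 'country'
    geojson    MEDIUMTEXT   NOT NULL    -- the region's GeoJSON geometry
);

CREATE TABLE map_definition (
    map_id     INT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL    -- e.g. 'US Northeast'
);

CREATE TABLE map_region (
    map_id     INT NOT NULL,
    region_id  INT NOT NULL,
    PRIMARY KEY (map_id, region_id),
    FOREIGN KEY (map_id)    REFERENCES map_definition (map_id),
    FOREIGN KEY (region_id) REFERENCES region (region_id)
);

-- Fetch the shapes for one map; the application stitches the rows into a
-- single GeoJSON FeatureCollection before handing it to the mapping library.
SELECT r.name, r.geojson
FROM map_region mr
JOIN region r ON r.region_id = mr.region_id
WHERE mr.map_id = 42;
```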

    Read the article

  • MySQL Online Database

    - by Marian
    Can anyone suggest a good free online MySQL database? I've tried four so far: db4free, FreeMySQL, onPhP and 000webhost. Each of them either gave me a timeout error on my connect file or actively restricted connections, meaning the host won't allow a remote connection to the database. If there isn't any good online database, can I create my own server using my computer? It rarely gets turned off, and when my server is offline I could return an error message saying that the server is currently offline. My final objective is to have a simple comment box for a webpage, which I believe won't need massive data storage: just 3 columns (id, name, comment). NOTE: Can't post more than two links yet.
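    For a comment box that small, the whole schema can be a single table. A minimal sketch in MySQL syntax (the three columns come from the question; the timestamp column and sample data are assumptions):

```sql
-- Hypothetical comments table for a single-page comment box.
CREATE TABLE comments (
    id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name     VARCHAR(100) NOT NULL,
    comment  TEXT         NOT NULL,
    posted   TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP  -- optional, but cheap to keep
);

-- Insert from the form handler, then list newest first on the page.
INSERT INTO comments (name, comment) VALUES ('Marian', 'First!');
SELECT name, comment, posted FROM comments ORDER BY posted DESC;
```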

    Read the article

  • SQL SERVER – Difference Between GRANT and WITH GRANT

    This was a very interesting question recently asked to me during my session at TechMela Nepal. The question is: what is the difference between GRANT and WITH GRANT when giving permissions to a user? Let us first see the syntax for both.

    GRANT:

        USE master;
        GRANT VIEW ANY DATABASE TO username;
        GO

    WITH GRANT:

        USE master;
        GRANT VIEW ANY DATABASE TO username WITH GRANT OPTION;
        GO

    The difference between both of these options [...]...
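    The practical difference: a plain GRANT lets the grantee use the permission, while WITH GRANT OPTION also lets the grantee pass it on to others. A hedged sketch (the logins Alice and Bob are made up for illustration and must already exist):

```sql
USE master;
GO

-- Alice gets the permission but cannot re-grant it.
GRANT VIEW ANY DATABASE TO Alice;
GO

-- Bob gets the permission and may grant it to other principals.
GRANT VIEW ANY DATABASE TO Bob WITH GRANT OPTION;
GO

-- Running as Bob, this succeeds; running as Alice, it fails with a
-- permission error because Alice lacks the GRANT option.
EXECUTE AS LOGIN = 'Bob';
GRANT VIEW ANY DATABASE TO Alice;
REVERT;
GO
```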

    Read the article

  • Daily tech links for .net and related technologies - June 8-11, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - June 8-11, 2010 Web Development ASPNET MVC: Handling Multiple Buttons on a Form with jQuery - Donn Building a MVC2 Template, Part 14, Logging Services - Eric Simple Accordion Menu With jQuery & ASP.NET - Steve Boschi Conditional Validation in MVC -Simonince Creating a RESTful Web Service Using ASP.Net MVC Part 23 – Bug Fixes and Area Support - Shoulders of Giants Web Design The Principles Of Cross-Browser CSS Coding - Louis Lazaris Transparency...(read more)

    Read the article

  • SQL SERVER – Quick Note of Database Mirroring

    Just a day ago, I was invited to a round-table meeting at a prestigious organization. They were planning to implement a High Availability solution using Database Mirroring. During the meeting, I made a few notes of what was being discussed there. I just thought it would be interesting for all of you to know about it. Database Mirroring works [...]...
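    For context, the core of a basic mirroring setup is pointing each partner at the other over a mirroring endpoint. A rough sketch, assuming the database PrincipalDb has already been restored WITH NORECOVERY on the mirror and that mirroring endpoints exist on port 5022 (all names and ports here are illustrative, not from the note):

```sql
-- On the mirror server first: point the restored copy at the principal.
ALTER DATABASE PrincipalDb
    SET PARTNER = 'TCP://principal.example.local:5022';

-- Then on the principal server: complete the pairing by pointing at the mirror.
ALTER DATABASE PrincipalDb
    SET PARTNER = 'TCP://mirror.example.local:5022';

-- Optional: add a witness instance to enable automatic failover.
ALTER DATABASE PrincipalDb
    SET WITNESS = 'TCP://witness.example.local:5022';
```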

    Read the article

  • SQL SERVER – What is Spatial Database? Developing with SQL Server Spatial and Deep Dive into Spatial

    What is a Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today [...]...
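    As a concrete taste of that added functionality in SQL Server (2008 and later), here is a small hedged sketch using the built-in GEOGRAPHY type; the table and the two sample points are invented for illustration:

```sql
-- Store round-earth (latitude/longitude) points and query by distance.
CREATE TABLE Landmarks
(
    Id        INT IDENTITY PRIMARY KEY,
    Name      NVARCHAR(100) NOT NULL,
    Location  GEOGRAPHY     NOT NULL
);

INSERT INTO Landmarks (Name, Location)
VALUES (N'Space Needle', geography::Point(47.6205, -122.3493, 4326)),
       (N'Pike Place',   geography::Point(47.6097, -122.3422, 4326));

-- Distance in meters between every pair of stored points.
SELECT a.Name, b.Name, a.Location.STDistance(b.Location) AS DistanceInMeters
FROM Landmarks a
JOIN Landmarks b ON a.Id < b.Id;
```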

    Read the article

  • TSQL formatting - a sure fire way to start a conversation.

    - by fatherjack
    There are probably as many opinions on ways to format code as there are people writing code, and I am not here to say that any one is better than any other. Well, that isn't true. I am here to say that one way is better than another, but this isn't a matter of preference or personal taste; this is an example of where sloppy formatting can cause TSQL to do weird and whacky things, and where following some simple methods can make your code more reliable and more robust. Take these two pieces of code, ready...(read more)
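    The excerpt cuts off before the article's own example, but a classic illustration of the kind of trap it describes (not necessarily the one the author uses) is a dropped comma in a SELECT list:

```sql
-- The missing comma silently turns LastName into a column alias for FirstName,
-- so the query runs without error but returns only one column.
SELECT FirstName LastName
FROM dbo.Person;

-- What was actually intended: two columns.
SELECT FirstName, LastName
FROM dbo.Person;
```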

    Read the article

  • Highly recommended: "5 Things SQL Server does different from what many developers expect" by Nico Ja

    A couple of weeks ago, the Belgian Techdays were held in Antwerp. Together with Scott Hillier I presented the SharePoint pre-conference sessions (watch them online over here, search for pre-conference or SharePoint). Even though Belgium is not a very big country, the Microsoft team managed to get some high-profile speakers like Anders Hejlsberg and Scott Hanselman. But if you have 60 minutes to spare, there is one session that I'd really recommend checking out, not related to SharePoint, but...

    Read the article

  • Serial plans: Threshold / Parallel_degree_limit = 1

    - by jean-pierre.dijcks
    As a very short follow-up on the previous post, here is some more on getting a serial plan and why that happens. Another reason - besides auto DOP not being on, which we looked at in the earlier post - and often a more prevalent one for getting a serial plan, is that the plan simply does not take long enough to be considered for a parallel path. The resulting plan and note look like this (note that this is a serial plan!):

        explain plan for select count(1) from sales;
        SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

        Plan hash value: 672559287
        --------------------------------------------------------------------------------------
        | Id  | Operation            | Name  | Rows  | Cost (%CPU)| Time     | Pstart| Pstop |
        --------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT     |       |     1 |     5   (0)| 00:00:01 |       |       |
        |   1 |  SORT AGGREGATE      |       |     1 |            |          |       |       |
        |   2 |   PARTITION RANGE ALL|       |   960 |     5   (0)| 00:00:01 |     1 |    16 |
        |   3 |    TABLE ACCESS FULL | SALES |   960 |     5   (0)| 00:00:01 |     1 |    16 |

        Note
        -----
           - automatic DOP: Computed Degree of Parallelism is 1 because of parallel threshold

        14 rows selected.

    The parallel threshold refers to parallel_min_time_threshold, and since I did not change the default (10s) the plan is not considered for a parallel degree computation and therefore stays with the serial execution. Now we go into the land of crazy: assume I do want this DOP=1 to happen. I could set the parameter in the init.ora, but to highlight it in this case I changed it on the session:

        alter session set parallel_degree_limit = 1;

    The result I get is:

        ERROR:
        ORA-02097: parameter cannot be modified because specified value is invalid
        ORA-00096: invalid value 1 for parameter parallel_degree_limit, must be from among
        CPU IO AUTO INTEGER>=2

    Which of course makes perfect sense...
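    The flip side, sketched here as an assumption rather than something from the post: if you actually want short statements to be considered for parallelism under auto DOP, you lower the threshold instead of fighting parallel_degree_limit.

```sql
-- The post notes the default (AUTO) corresponds to roughly 10 seconds;
-- estimated run times below the threshold stay serial.
alter session set parallel_min_time_threshold = 1;   -- seconds

-- Re-explain the statement; with a low enough threshold the note about the
-- parallel threshold should no longer force a computed DOP of 1.
explain plan for select count(1) from sales;
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
```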

    Read the article

  • Shared Data Source name error underscore characters added

    - by mick
    The name of our shared data source in RS (Report Server) is "AF1 Live Database" (no underscore characters - just spaces between words), and it is the same in Report Builder in VS. However, the following error pops up when the RDL of this report is uploaded onto our company site and run: "The report server cannot process the report or shared dataset. The shared data source 'AF1_Live_Database' for the report server or SharePoint site is not valid. Browse to the server or site and select a shared data source. (rsInvalidDataSourceReference)" We have no idea why the error reports the shared data source as 'AF1_Live_Database' with underscore characters. As this appears to be the problem that keeps the report from running, we are seeking your help, thanks.

    Read the article

  • Basics of Join Factorization

    - by Hong Su
    We continue our series on optimizer transformations with a post that describes the Join Factorization transformation. The Join Factorization transformation was introduced in Oracle 11g Release 2 and applies to UNION ALL queries. Union all queries are commonly used in database applications, especially in data integration applications. In many scenarios the branches in a UNION All query share a common processing, i.e, refer to the same tables. In the current Oracle execution strategy, each branch of a UNION ALL query is evaluated independently, which leads to repetitive processing, including data access and join. The join factorization transformation offers an opportunity to share the common computations across the UNION ALL branches. Currently, join factorization only factorizes common references to base tables only, i.e, not views. Consider a simple example of query Q1. Q1:    select t1.c1, t2.c2    from t1, t2, t3    where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2   union all    select t1.c1, t2.c2    from t1, t2, t4    where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3; Table t1 appears in both the branches. As does the filter predicates on t1 (t1.c1 > 1) and the join predicates involving t1 (t1.c1 = t2.c1). Nevertheless, without any transformation, the scan (and the filtering) on t1 has to be done twice, once per branch. Such a query may benefit from join factorization which can transform Q1 into Q2 as follows: Q2:    select t1.c1, VW_JF_1.item_2    from t1, (select t2.c1 item_1, t2.c2 item_2                   from t2, t3                    where t2.c2 = t3.c2 and t2.c2 = 2                                  union all                   select t2.c1 item_1, t2.c2 item_2                   from t2, t4                    where t2.c3 = t4.c3) VW_JF_1    where t1.c1 = VW_JF_1.item_1 and t1.c1 > 1; In Q2, t1 is "factorized" and thus the table scan and the filtering on t1 is done only once (it's shared). If t1 is large, then avoiding one extra scan of t1 can lead to a huge performance improvement. Another benefit of join factorization is that it can open up more join orders. Let's look at query Q3. Q3:    select *    from t5, (select t1.c1, t2.c2                  from t1, t2, t3                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c2 = 2 and t2.c2 = t3.c2                 union all                  select t1.c1, t2.c2                  from t1, t2, t4                  where t1.c1 = t2.c1 and t1.c1 > 1 and t2.c3 = t4.c3) V;   where t5.c1 = V.c1 In Q3, view V is same as Q1. Before join factorization, t1, t2 and t3 must be joined first before they can be joined with t5. But if join factorization factorizes t1 from view V, t1 can then be joined with t5. This opens up new join orders. That being said, join factorization imposes certain join orders. For example, in Q2, t2 and t3 appear in the first branch of the UNION ALL query in view VW_JF_1. T2 must be joined with t3 before it can be joined with t1 which is outside of the VW_JF_1 view. The imposed join order may not necessarily be the best join order. For this reason, join factorization is performed under cost-based transformation framework; this means that we cost the plans with and without join factorization and choose the cheapest plan. Note that if the branches in UNION ALL have DISTINCT clauses, join factorization is not valid. For example, Q4 is NOT semantically equivalent to Q5.   
Q4:     select distinct t1.*      from t1, t2      where t1.c1 = t2.c1  union all      select distinct t1.*      from t1, t2      where t1.c1 = t2.c1 Q5:    select distinct t1.*     from t1, (select t2.c1 item_1                   from t2                union all                   select t2.c1 item_1                  from t2) VW_JF_1     where t1.c1 = VW_JF_1.item_1 Q4 might return more rows than Q5. Q5's results are guaranteed to be duplicate free because of the DISTINCT key word at the top level while Q4's results might contain duplicates.   The examples given so far involve inner joins only. Join factorization is also supported in outer join, anti join and semi join. But only the right tables of outer join, anti join and semi joins can be factorized. It is not semantically correct to factorize the left table of outer join, anti join or semi join. For example, Q6 is NOT semantically equivalent to Q7. Q6:     select t1.c1, t2.c2    from t1, t2    where t1.c1 = t2.c1(+) and t2.c2 (+) = 2  union all    select t1.c1, t2.c2    from t1, t2      where t1.c1 = t2.c1(+) and t2.c2 (+) = 3 Q7:     select t1.c1, VW_JF_1.item_2    from t1, (select t2.c1 item_1, t2.c2 item_2                  from t2                  where t2.c2 = 2                union all                  select t2.c1 item_1, t2.c2 item_2                  from t2                                                                                                    where t2.c2 = 3) VW_JF_1       where t1.c1 = VW_JF_1.item_1(+)                                                                  However, the right side of an outer join can be factorized. For example, join factorization can transform Q8 to Q9 by factorizing t2, which is the right table of an outer join. Q8:    select t1.c2, t2.c2    from t1, t2      where t1.c1 = t2.c1 (+) and t1.c1 = 1 union all    select t1.c2, t2.c2    from t1, t2    where t1.c1 = t2.c1(+) and t1.c1 = 2 Q9:   select VW_JF_1.item_2, t2.c2   from t2,             (select t1.c1 item_1, t1.c2 item_2            from t1            where t1.c1 = 1           union all            select t1.c1 item_1, t1.c2 item_2            from t1            where t1.c1 = 2) VW_JF_1   where VW_JF_1.item_1 = t2.c1(+) All of the examples in this blog show factorizing a single table from two branches. This is just for ease of illustration. Join factorization can factorize multiple tables and from more than two UNION ALL branches.  SummaryJoin factorization is a cost-based transformation. It can factorize common computations from branches in a UNION ALL query which can lead to huge performance improvement. 

    Read the article

  • SQL Query for Determining SharePoint ACL Sizes

    - by Damon Armstrong
    When a SharePoint Access Control List (ACL) exceeds 64 KB for a particular URL, the contents under that URL become unsearchable due to limitations in the SharePoint search engine. The error most often seen is "The Parameter is Incorrect", which really helps to pinpoint the problem (it's difficult to convey extreme sarcasm here; please note that it is intended). Exceeding this limit is not unheard of – it can happen when users brute-force security into working by continually overriding inherited permissions and assigning user-level access to securable objects. Once you have this issue, determining where you need to focus to fix the problem can be difficult. Fortunately, there is a query that you can run on a content database that can help identify the issue:

        SELECT [SiteId],
               MIN([ScopeUrl]) AS URL,
               SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
               COUNT(*) AS AclEntries
        FROM [Perms] (NOLOCK)
        GROUP BY SiteId
        ORDER BY AclSizeKB DESC

    This query results in a list of ACL sizes and entry counts on a site-by-site basis. You can also remove the grouping by site to see a more granular breakdown:

        SELECT [ScopeUrl] AS URL,
               SUM(DATALENGTH([Acl]))/1024 AS AclSizeKB,
               COUNT(*) AS AclEntries
        FROM [Perms] (NOLOCK)
        GROUP BY ScopeUrl
        ORDER BY AclSizeKB DESC

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    Been struggling with this on an architectural level. I have an object which can be commented on; let's call it a Post. Every Post has a unique ID. Now I want to comment on that Post, and I can use the ID as a foreign key: each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "top level" comments. When I comment on a comment, however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since IDs are assigned sequentially, I can no longer simply use ItemID to tell where in the tree the comment is attached; e.g., both a Post and a PostComment might have an ID of '5', so my foreign key relationship is ambiguous. This seems like it could go on infinitely, with PostCommentComments etc. What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like, to know which collection to attach the ID to? This strikes me as the best solution I've seen so far, but now I feel like I need to make recursive database calls, which start to get expensive. Meaning: I get a Post and get all PostComments where ItemID == Post.ID && IsPostComment == true. Then I take that collection, gather all the IDs of the PostComments, and do another search where ItemID == PostComment[all].ID && IsPostComment == false, and repeat indefinitely. This means I make a call for every layer, and if I'm loading 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
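    A common way out of this, sketched below as an assumption (not taken from an answer): keep a single comment table with a nullable self-referencing parent column, so a comment either hangs off the post (ParentCommentId IS NULL) or off another comment, and fetch a whole thread in one round trip with a recursive query (SQL Server syntax; names are illustrative):

```sql
-- Hypothetical adjacency-list schema: one table handles every nesting depth.
CREATE TABLE PostComment
(
    CommentId        INT IDENTITY PRIMARY KEY,
    PostId           INT NOT NULL,                  -- FK to Post
    ParentCommentId  INT NULL                       -- NULL = top-level comment
        REFERENCES PostComment (CommentId),
    Body             NVARCHAR(MAX) NOT NULL
);

-- One query returns the full tree for a post, with each row's depth.
WITH Thread AS
(
    SELECT CommentId, ParentCommentId, Body, 0 AS Depth
    FROM PostComment
    WHERE PostId = 5 AND ParentCommentId IS NULL    -- 5 = the post being loaded

    UNION ALL

    SELECT c.CommentId, c.ParentCommentId, c.Body, t.Depth + 1
    FROM PostComment c
    JOIN Thread t ON c.ParentCommentId = t.CommentId
)
SELECT * FROM Thread;
```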

    Read the article

  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head at how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and while trying to execute mssqlsrv.exe -c -f -m):

        Event Type: Error
        Event Source: MSSQLSERVER
        Event ID: 913
        Could not find database ID 3. Database may not be activated yet or may be in transition.
        Reissue the query once the database is available. If you do not think this error is due
        to a database that is transitioning its state and this error continues to occur, contact
        your primary support provider. Please have available for review the Microsoft SQL Server
        error log and any additional information relevant to the circumstances when the error
        occurred.

    and...

        Event Type: Information
        Event Source: MSSQLSERVER
        Event ID: 1814
        Could not create tempdb. You may not have enough disk space available. Free additional
        disk space by deleting other files on the tempdb drive and then restart SQL Server.
        Check for additional errors in the event log that may indicate why the tempdb files
        could not be initialized.

    I have tried renaming tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.
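    One avenue worth noting, offered as an assumption based on the symptoms rather than a confirmed diagnosis: database ID 3 is model, and SQL Server recreates tempdb from model at every startup, so a damaged model database will also produce the tempdb error. A hedged recovery sketch, assuming a good backup of model exists (the backup path is illustrative):

```sql
-- 1. Start the instance minimally, recovering only master
--    (startup parameters, not T-SQL):  sqlservr.exe -c -m -T3608
-- 2. From an admin connection, restore model from backup:
RESTORE DATABASE model
FROM DISK = N'D:\Backups\model.bak'
WITH REPLACE;
-- 3. Remove the extra startup parameters and restart the service normally;
--    tempdb is then recreated from the restored model.
```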

    Read the article

  • Materialized View does not import properly when importing a db for a second time in another schema [closed]

    - by marinus
    When I import a database with materialized view mv_mt in just one schema there are no errors. create materialized view mv_mt refresh complete next trunc( sysdate ) + 1 as SELECT sysdate, media_type.* from media_type; But when I try to import the same database to a copy in another schema in the same tablespace I get the following errors: IMP-00017: following statement failed with ORACLE error 1: "BEGIN DBMS_JOB.ISUBMIT(JOB=>438,WHAT=>'dbms_refresh.refresh(''"ALEXANDRA"" "."MV_MT"'');',NEXT_DATE=>TO_DATE('2012-07-02:14:22:36','YYYY-MM-DD:HH24:MI:" "SS'),INTERVAL=>'sysdate + 1 / 24 / 60 / 6 ',NO_PARSE=>TRUE); END;" IMP-00003: ORACLE error 1 encountered ORA-00001: unique constraint (SYS.I_JOB_JOB) violated ORA-06512: at "SYS.DBMS_JOB", line 100 ORA-06512: at line 1 IMP-00017: following statement failed with ORACLE error 23421: "BEGIN dbms_refresh.make('"ALEXANDRA"."MV_MT"',list=>null,next_date=>null," "interval=>null,implicit_destroy=>TRUE,lax=>FALSE,job=>438,rollback_seg=>NUL" "L,push_deferred_rpc=>TRUE,refresh_after_errors=>FALSE,purge_option => 1,par" "allelism => 0,heap_size => 0); END;" IMP-00003: ORACLE error 23421 encountered ORA-23421: job number 438 is not a job in the job queue ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86 ORA-06512: at "SYS.DBMS_IJOB", line 793 ORA-06512: at "SYS.DBMS_REFRESH", line 86 ORA-06512: at "SYS.DBMS_REFRESH", line 62 ORA-06512: at line 1 IMP-00017: following statement failed with ORACLE error 23410: "BEGIN dbms_refresh.add(name=>'"ALEXANDRA"."MV_MT"',list=>'"ALEXANDRA"."MV" "_MT"',siteid=>0,export_db=>'ORCL01'); END;" IMP-00003: ORACLE error 23410 encountered ORA-23410: materialized view "ALEXANDRA"."MV_MT" is already in a refresh group ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95 ORA-06512: at "SYS.DBMS_IREFRESH", line 484 ORA-06512: at "SYS.DBMS_REFRESH", line 140 ORA-06512: at "SYS.DBMS_REFRESH", line 125 ORA-06512: at line 1 Anyone any ideas?
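    One thing that may be worth checking, offered as an assumption rather than a confirmed fix: the errors suggest the refresh group and job created by the first import are still registered, so the second import collides with them (duplicate job number, MV already in a refresh group). Inspecting and clearing the stale refresh group before re-importing might look like this (the owner and MV names are taken from the error text; DBMS_REFRESH.DESTROY removes the refresh group only, not the materialized view):

```sql
-- See which refresh groups / jobs already reference the materialized view.
SELECT rowner, rname, job FROM dba_refresh WHERE rowner = 'ALEXANDRA';
SELECT job, what FROM dba_jobs WHERE what LIKE '%MV_MT%';

-- Drop the stale refresh group so the import can recreate it cleanly.
BEGIN
    DBMS_REFRESH.DESTROY(name => '"ALEXANDRA"."MV_MT"');
END;
/
```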

    Read the article

  • Adding Actions to a Cube in SQL Server Analysis Services 2008

    Actions are a powerful way of extending the value of SSAS cubes for the end user, who can click on a cube or a portion of a cube to start an application with the selected item as a parameter, or to retrieve information about the selected item. Actions haven't been well documented until now; Robert Sheldon once more makes everything clear.

    Read the article

  • It's called College.

    - by jeffreyabecker
    Today I saw yet another 'GUID vs int as your primary key' article. Like most of the ones I've read, this was filled with technical misrepresentations and outright fallacies. Chef's famous line that "There's a time and a place for everything, children" applies here. GUIDs have distinct advantages and disadvantages which should be considered when choosing a data type for the primary key. Fallacy 1: "It's easier." An integer data type (tinyint, smallint, int, bigint) is a better artificial key than a GUID because it's easier to remember. I'm a firm believer that your artificial primary keys should be opaque gibberish. PKs are an implementation detail which should never be exposed to the user or relied on for business logic. If you want things to come back in an order, add an ORDER BY clause and SortOrder fields. If you want a human-usable look-up, add a business key with a unique constraint. If you want to know what order things were inserted into a table, add a timestamp. Fallacy 2: "Size matters." For many applications, the size of the artificial primary key is going to be irrelevant. The particular article which kicked this post off stated repeatedly that joining against an int has better performance than joining against a GUID. In computer science the performance of your algorithm is always a function of the number of data points, and this still holds true for databases. Unless your table is very large, the performance difference between an int and a GUID probably isn't going to be measurable, let alone noticeable. My personal experience is that performance becomes an issue when you start having billions of rows in the table. At that point you should probably be looking to move from int to bigint, so the effective space/performance gain isn't as much as you'd think. GUID advantages: Insert-ability / mergeability: you can reliably insert GUIDs into tables without key collisions. Database independence: saving entities to the database often requires knowing IDs. With identity-based IDs, the ID must be selected back after every insert; GUIDs can be generated application-side, allowing much faster inserts. GUID disadvantages: Generatability: you can calculate the next ID for an integer PK pretty easily in your head, but you will need a program to generate GUIDs. Solution: "Select top 100 newid() from sysobjects". Fragmentation: most GUID generation algorithms generate pseudo-random GUIDs, which can cause inserts into the middle of your clustered index. Solutions: add a default of newsequentialid() or use GuidComb in NHibernate.
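    A short sketch of that last mitigation on the SQL Server side (table and constraint names are made up for illustration): NEWSEQUENTIALID() can only be used in a DEFAULT constraint, and it generates ever-increasing values on a given machine, so new rows land at the end of the clustered index instead of fragmenting it.

```sql
CREATE TABLE dbo.Orders
(
    OrderId UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Orders_OrderId DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED,
    CustomerName NVARCHAR(100) NOT NULL
);

-- Rows inserted without specifying OrderId pick up sequential GUIDs.
INSERT INTO dbo.Orders (CustomerName) VALUES (N'Contoso'), (N'Fabrikam');
SELECT OrderId, CustomerName FROM dbo.Orders;
```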

    Read the article

  • Where is the TFS database?

    - by Blanthor
    I've been using TFS 2010 with no problems. I tried adding a user and I got the following error message. "TF30063: You are not authorized to access <serverName>\DefaultCollection. -The remote server returned an error: (401) Unauthorized." I remoted into the server, <serverName>, and opened the TFS Console. The logs mentioned a connection string: ConnectionString: Data Source=<serverName>\SS2008;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True While remoted in I open SQL Server 2008 Management Studio opening the (local) server with Windows Authentication. It shows the connection to be (local)(SQL Server 9.04.03 - <serverName>\Admin), and there is no Tfs_DefaultCollection database. Can someone tell me what is going on? Was I wrong in connecting to this instance of the database (i.e. Is the log file the wrong place to find the connection string)? Is the database so corrupted that SQL Manager Studio cannot see it anymore, although TFS could? Should I be logging into Management Studio as user SS2008? btw I don't know of any such credentials.
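    For what it's worth, the connection string in the log points at a named instance (<serverName>\SS2008), while Management Studio was opened against the default (local) instance, so the two would not necessarily show the same databases. A quick hedged check would be to connect Management Studio to the <serverName>\SS2008 instance named in the log and list the TFS databases:

```sql
-- Run while connected to the <serverName>\SS2008 instance from the log.
SELECT name, state_desc
FROM sys.databases
WHERE name LIKE 'Tfs%';
```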

    Read the article

  • Which would be a better way to load data via ajax

    - by Mike
    I am using Google Maps and returning HTML/lat/long from my MySQL database.

    Currently:
    - A user picks a business category, e.g. "Video Production".
    - An ajax call is sent to a CodeIgniter controller.
    - The controller queries the db and returns the following data via JSON: lat/long of the marker and HTML for the popup window (approximately 34 rows in the database, across two tables, per business).
    - The ajax call receives this data and plots the marker, along with the HTML, onto the map.

    The data returned from the controller is one big JSON object. This is done for all businesses that exist in the Video Production category (currently approx 40 businesses). As you can see, pulling this data for multiple categories (hundreds of businesses) can get very taxing on the server.

    My question is: would it be more beneficial to modify the process flow as such:
    - A user picks a business category, e.g. "Video Production".
    - An ajax call is sent to a CodeIgniter controller.
    - The controller queries the database for just the location-based information: lat/long and level (used to change the marker icon color). This would be a single row per business with several columns.
    - The ajax call receives this data and plots the marker on the map.
    - When the user clicks a marker, an ajax call is sent to a CodeIgniter controller.
    - The controller queries the database for the HTML and additional data based on business_id.

    And if not, what are some better suggestions for this problem? In summary, rather than including the HTML and additional data for each business, this means submitting only minimal location information and then re-querying for the full information when each business marker is clicked.

    Potential downsides:
    - longer load times when a user clicks a marker icon
    - more code??
    - more queries to the database
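    In query terms the proposed split looks roughly like this (a sketch only; the table and column names are guesses, not taken from the question):

```sql
-- Lightweight query for the initial map load: one small row per marker.
SELECT b.business_id, b.lat, b.lng, b.level
FROM businesses b
JOIN business_categories bc ON bc.business_id = b.business_id
WHERE bc.category = 'Video Production';

-- Detail query, run only when a marker is clicked.
SELECT d.*
FROM business_details d
WHERE d.business_id = 123;   -- id passed in by the second ajax call
```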

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number at which the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expandable macro, it is an attribute, but it serves exactly the same purpose. Now we get:

    - CallerLineNumberAttribute == __LINE__
    - CallerFilePathAttribute == __FILE__
    - CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without the need to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute and the property name is inserted as a string directly by the C# compiler at compile time.

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose this feature for tracing to get your hands on the method name of your caller, along with other information, very quickly. All of this information is added at compile time, which is much faster than other approaches like walking the stack. MSDN shows the usage of these attributes with an example:

        public static void TraceMessage(string message,
                                        [CallerMemberName] string memberName = "",
                                        [CallerFilePath] string sourceFilePath = "",
                                        [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})",
                message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing I usually want an API which allows me to trace method enter and leave, and to trace messages with a severity like Info, Warning, Error. When I print a trace message it is very useful to print the method and type name as well. So your API must either be able to pass the method and type name as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
A usable Trace Api might therefore look like   enum TraceTypes { None = 0, EnterLeave = 1 << 0, Info = 1 << 1, Warn = 1 << 2, Error = 1 << 3 } class Tracer : IDisposable { string Type; string Method; public Tracer(string type, string method) { Type = type; Method = method; if (IsEnabled(TraceTypes.EnterLeave,Type, Method)) { } } private bool IsEnabled(TraceTypes traceTypes, string Type, string Method) { // Do checking here if tracing is enabled return false; } public void Info(string fmt, params object[] args) { } public void Warn(string fmt, params object[] args) { } public void Error(string fmt, params object[] args) { } public static void Info(string type, string method, string fmt, params object[] args) { } public static void Warn(string type, string method, string fmt, params object[] args) { } public static void Error(string type, string method, string fmt, params object[] args) { } public void Dispose() { // trace method leave } } This minimal trace API is very fast but hard to maintain since you need to pass in the type and method name as hard coded strings which can change from time to time. But now we have at least CallerMemberName to rid of the explicit method parameter right? Not really. Since any acceptable usable trace Api should have a method signature like Tracexxx(… string fmt, params [] object args) we not able to add additional optional parameters after the args array. If we would put it before the format string we would need to make it optional as well which would mean the compiler would need to figure out what our trace message and arguments are (not likely) or we would need to specify everything explicitly just like before . There are ways around this by providing a myriad of overloads which in the end are routed to the very same method but that is ugly. I am not sure if nobody inside MS agrees that the above API is reasonable to have or (more likely) that the whole talk about you can use this feature for diagnostic purposes was not a core feature at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow for variable argument arrays after the params keyword another set of optional arguments which are always filled by the compiler but I do not know if this is an easy one. The thing I am missing much more is the not provided CallerType attribute. But not in the way you would think of. In the API above I did add some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check if tracing for this type is enabled at all. The data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I do expect to happen I have come up with an alternative approach which allows me basically to attach run time data to any existing type object in super fast way. The key to success is the usage of generics.   class Tracer<T> : IDisposable { string Method; public Tracer(string method) { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public void Dispose() { if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave)) { } } public static void Info(string fmt, params object[] args) { } /// <summary> /// Every type gets its own instance with a fresh set of variables to describe the /// current filter status. 
/// </summary> /// <typeparam name="T"></typeparam> internal class TraceData<UsingType> { internal static TraceData<UsingType> Instance = new TraceData<UsingType>(); public bool IsInitialized = false; // flag if we need to reinit the trace data in case of reconfigured trace settings at runtime public TraceTypes Enabled = TraceTypes.None; // Enabled trace levels for this type } } We do not need to pass the type as string or Type object to the trace Api. Instead we define a generic Api that accepts the using type as generic parameter. Then we can create a TraceData static instance which is due to the nature of generics a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing you do not want to bring the app down by simply enabling tracing for one special rarely used type. The trace filter performance for the types which are not enabled must be therefore the fasted code path. This approach has the nice side effect that if you store the TraceData instances in one global list you can reconfigure tracing at runtime safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type,TraceData> object but big hash tables have random memory access semantics which is bad for cache locality and you always need to pay for the lookup which involves hash code generation, equality check and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing Api with minimal perf overhead. But it is cumbersome to write the generic type argument always explicitly and worse if you do refactor code and move parts of it to other classes it might be that you cannot configure tracing correctly. I would like therefore to decorate my type with an attribute [CallerType] class Tracer<T> : IDisposable to tell the compiler to fill in the generic type argument automatically. class Program { static void Main(string[] args) { using (var t = new Tracer()) // equivalent to new Tracer<Program>() { That would be really useful and super fast since you do not need to pass any type object around but you do have full type infos at hand. This change would be breaking if another non generic type exists in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion since you can today already get conflicts if two generic types of the same name are defined in different namespaces. This would be only a variation of this issue. When you do think about this further you can add more features like to trace the exception in your Dispose method if the method is left with an exception with that little trick I did write some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can e.g. Reconfigure tracing at run time. Take a memory dump when a specific method is left with a specific exception. Throw an exception when a specific trace statement is hit (useful for testing error conditions). Execute a passed delegate which e.g. dumps additional state when enabled. Write data to an in memory ring buffer and dump it when specific events do occur (e.g. method is left with an exception, triggered from outside). Write data to an output device. …. 
This stuff is really useful to have when your code is in production on a mission critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (ok with .NET 4 not anymore except if you enable a compatibility flag) where you would like to have a minidump or you have reached after two weeks of operation a state where you need a full memory dump at a specific point in time in the middle of an transaction. At my older machine I do get with this super fast approach 50 million traces/s when tracing is disabled. When I do know that tracing is enabled for this type I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further if a specific action or output device is configured for this method which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s so performance is not so important anymore since I do want to do something now. The CallerMemberName feature of the C# 5 compiler is nice but I would have preferred to get direct access to the MethodHandle and not to the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site to augment the static CLR type data with run time data.

    Read the article
