Search Results

Search found 43986 results on 1760 pages for 'sql session state'.


  • Invalid argument supplied for foreach() using adldap

    - by Brad
    I am using adldap (http://adldap.sourceforge.net/). I am passing the session from page to page and checking that the username stored in the session is a member of a certain group; for this example, it is the STAFF group.

        <?php
        ini_set('display_errors', 1);
        error_reporting(E_ALL);
        require_once('/web/ee_web/include/adLDAP.php');
        $adldap = new adLDAP();
        session_start();
        $group = "STAFF";
        //$authUser = $adldap->authenticate($username, $password);
        $result = $adldap->user_groups($_SESSION['user_session']);
        foreach ($result as $key => $value) {
            switch ($value) {
                case $group:
                    print '<h3>'.$group.'</h3>';
                    break;
                default:
                    print '<h3>Did not find specific value: '.$value.'</h3>';
            }
            if ($value == $group) {
                print 'for loop broke';
                break;
            }
        }
        ?>

    It gives me the error "Warning: Invalid argument supplied for foreach() on line 15", which is this line of code:

        foreach ($result as $key => $value) {

    When I uncomment the line $authUser = $adldap->authenticate($username, $password); and enter the appropriate username and password, it works fine. But I shouldn't have to, since the session is valid; I just want to see whether the username stored within the valid session is a member of the STAFF group. Why would it be giving me that problem?

    Read the article

  • How to solve concurrency problems in ASP.NET Windows-Workflow and ActiveRecord/NHibernate?

    - by Famous Nerd
    I have found that ActiveRecord uses the SessionScope object within the ASP.NET application, and that if the web site is read-write we can have a tug-of-war between the workflow's own data-access SessionScope and that of the ASP.NET site. I would really like the Windows Workflow runtime to use the same session object as the web site; however, they have different lifetimes. Sometimes a web request may save a very simple piece of data and execute quickly, but if the web site kicks off a workflow process, how can that workflow make data modifications while still allowing Application_EndRequest to dispose the ASP.NET SessionScope? It's as if ownership of the SessionScope should be shared between the workflow runtime and the ASP.NET website. The Manual Workflow Scheduler may be the savior: if a workflow is synchronous and merely uses CallExternalMethod to interact with the host, then we could constrain all the data access to the host, and the SessionScope can exist once. This, however, won't solve the problem of a delay activity: if that delay fires, we could need to update data, in which case we'd need an isolated SessionScope and concurrency issues may arise. This differs from SharePoint workflows, where it seems that the SharePoint workflow can save data from both the web and the workflow, and concurrency is handled through other means. Can anyone offer any suggestions on how to allow the workflow to manage data and play nicely with ASP.NET web sites?

    Read the article

  • How should I avoid memoization causing bugs in Ruby?

    - by Andrew Grimm
    Is there a consensus on how to avoid memoization causing bugs due to mutable state? In this example, a cached result had its state mutated, and therefore gave the wrong result the second time it was called.

        class Greeter
          def initialize
            @greeting_cache = {}
          end

          def expensive_greeting_calculation(formality)
            case formality
            when :casual then "Hi"
            when :formal then "Hello"
            end
          end

          def greeting(formality)
            unless @greeting_cache.has_key?(formality)
              @greeting_cache[formality] = expensive_greeting_calculation(formality)
            end
            @greeting_cache[formality]
          end
        end

        def memoization_mutator
          greeter = Greeter.new
          first_person = "Bob"
          # Mildly contrived in this case,
          # but you could encounter this in more complex scenarios
          puts(greeter.greeting(:casual) << " " << first_person)  # => Hi Bob
          second_person = "Sue"
          puts(greeter.greeting(:casual) << " " << second_person) # => Hi Bob Sue
        end

        memoization_mutator

    Approaches I can see to avoid this are:

    1. greeting could return a dup or clone of @greeting_cache[formality].
    2. greeting could freeze the result of @greeting_cache[formality]. That'd cause an exception to be raised when memoization_mutator appends strings to it.
    3. Check all code that uses the result of greeting to ensure none of it does any mutating of the string.

    Is there a consensus on the best approach? Is the only disadvantage of doing (1) or (2) decreased performance? (I also suspect freezing an object may not work fully if it has references to other objects.) Side note: this problem doesn't affect the main application of memoization: as Fixnums are immutable, calculating Fibonacci sequences doesn't have problems with mutable state. :)

    Read the article

  • Flex 4 animation question

    - by Jerry
    Hello. I am trying to animate a button, nested in a vertical layout, so that it moves vertically. I am not sure if the group's layout restricts the button from moving vertically. Are there ways to work around it? Thanks for the help.

        <s:states>
            <s:State name="default"/>
            <s:State name="addRecommend"/>
            <s:State name="seeOther"/>
        </s:states>

    AS:

        protected function add_clickHandler(event:MouseEvent):void
        {
            currentState = "addRecommend";
            addRecommendMove.play();
        }

        <s:transitions>
            <s:Transition fromState="default" toState="addRecommend">
                <s:Sequence id="addRecommendMove">
                    <s:Move yTo="50" target="{add}"/> <!-- add button doesn't move at all -->
                </s:Sequence>
            </s:Transition>
            <s:Transition fromState="addRecommend" toState="seeOther">
                <s:Sequence>
                    <s:Move yBy="50" target="{seeOthers}"/>
                </s:Sequence>
            </s:Transition>
        </s:transitions>

        <s:layout>
            <s:VerticalLayout paddingTop="15" paddingRight="10" paddingLeft="7"/>
        </s:layout>

        <s:Button id="add" click="add_clickHandler(event)"/>
        <s:Button id="seeOthers"/>

    Read the article

  • Multiple SessionFactories in Windows Service with NHibernate

    - by Rob Taylor
    Hi all, I have a web app which connects to 2 DBs (one core, the other a logging DB). I must now create a Windows service which will use the same business logic/data access DLLs. However, when I try to reference 2 session factories in the service app and call the factory.GetCurrentSession() method, I get the error message "No session bound to current context". Does anyone have a suggestion about how this can be done?

        public class StaticSessionManager
        {
            public static readonly ISessionFactory SessionFactory;
            public static readonly ISessionFactory LoggingSessionFactory;

            static StaticSessionManager()
            {
                Configuration cfg = new Configuration();
                string fileName = System.Configuration.ConfigurationSettings.AppSettings["DefaultNHihbernateConfigFile"];
                string executingPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase);
                fileName = executingPath + "\\" + fileName;
                SessionFactory = cfg.Configure(fileName).BuildSessionFactory();

                cfg = new Configuration();
                fileName = System.Configuration.ConfigurationSettings.AppSettings["LoggingNHihbernateConfigFile"];
                fileName = executingPath + "\\" + fileName;
                LoggingSessionFactory = cfg.Configure(fileName).BuildSessionFactory();
            }
        }

    The configuration file has the setting:

        <property name="current_session_context_class">call</property>

    The service sets up the factories:

        private ISession _session = null;
        private ISession _loggingSession = null;
        private ISessionFactory _sessionFactory = StaticSessionManager.SessionFactory;
        private ISessionFactory _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        ...
        _sessionFactory = StaticSessionManager.SessionFactory;
        _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        _session = _sessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_session);
        _loggingSession = _loggingSessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_loggingSession);

    So finally, I try to call the correct factory with:

        ISession session = StaticSessionManager.SessionFactory.GetCurrentSession();

    Can anyone suggest a better way to handle this? Thanks in advance! Rob

    Read the article

  • iPodMusicPlayer playbackState inaccurate after odd conditions

    - by Shane Costello
    I have a simple music player app that runs into a really weird problem. First of all, while playing music in the locked state, I allow the user to double-click the Home button and use the locked iPod music controls. I did notice, however, that while in the locked state my app doesn't receive any of its registered notifications. For the most part this is fine anyway. But if the user is playing music for at least 15 minutes (I'm not sure why, but any less and this problem doesn't occur) while in the locked state, using some kind of headphone or aux jack, and then unplugs the headphone/aux jack while the device is still playing music, the iPodMusicPlayer will auto-pause. Which is exactly what I would want it to do, but after this happens, when the user unlocks their device and gives focus to the app again, the iPodMusicPlayer's playbackState is inaccurate.

        - (IBAction)playPause:(id)sender {
            if ([musicPlayer playbackState] == MPMusicPlaybackStatePlaying) {
                [musicPlayer pause];
            } else {
                [musicPlayer play];
            }
        }

    where musicPlayer = [MPMusicPlayerController iPodMusicPlayer]. Under normal circumstances this runs perfectly fine, but after these conditions my breakpoint will hit the condition for MPMusicPlaybackStatePlaying while the music is paused, and vice versa. The only way I've been able to fix this is to either make a new selection of music or to terminate the app and reopen it. I've tried tons of workarounds to fix this problem programmatically, but nothing turns out a 100% bug-free fix. Does anyone have any clue as to why this happens in the first place?

    Read the article

  • can bind successfully to the ldap server, but needs to know how to find user w/i AD

    - by Brad
    I created a login form to bind to the LDAP server; if the bind is successful, it creates a session (in which the user's username is stored), then I go to another page that has session_start(); and it works fine. What I want to do now is add code to test whether that user is a member of a specific group. So in theory, this is what I want to do:

        if (username session is valid) {
            search ldap for user -> get list of groups user is member of
            foreach (group they are member of) {
                switch (group) {
                    case STAFF:
                        print 'they are member of staff group';
                        $access = true;
                        break;
                    default:
                        print 'not a member of STAFF group';
                        $access = false;
                        break;
                }
                if (group == STAFF) {
                    break;
                }
            }
            if ($access == TRUE) {
                // you have access to the content on this page
            } else {
                // you do not have access to this page
            }
        }

    How do I do an ldap_search without binding? I don't want to keep asking for their password on each page, and I can't pass their password through a session. Any help is appreciated.

    Read the article

  • Can I use php SESSIONS in spotify apps?

    - by user1593846
    Since PHP can't be used in Spotify apps directly, I am using it on my server only, but I am having problems getting sessions to work. Is it even possible to use sessions? I want to use one to see if a user is registered in my database or not, so I am doing it like this:

        session_start();
        $query = "SELECT * FROM user WHERE facebookID = {$fbid}";
        $result = mysql_query($query) or die(mysql_error());
        $row = mysql_fetch_array($result);
        $num_results = mysql_num_rows($result);
        if ($num_results > 0) {
            echo "THERE IS A USER!";
            $row['facebookID'] = $_SESSION[fid];
        } else {
            echo "THERE IS NO USER!";
        }

    The authentication to the server works fine, but the session does not seem to be working. On another PHP page, which is called after the above code has run via some Ajax code, I simply wanted to try this:

        session_start();
        if (isset($_SESSION['fid'])) {
            echo "SESSION EXIST";
        } else {
            echo "NO SESSION";
        }

    Anyway, is there a special way to do this in Spotify apps, or do I have to do this in some other way?

    Read the article

  • LLBLGen Pro v3.0 with Entity Framework v4.0 (12m video)

    - by FransBouma
    Today I recorded a video in which I illustrate some of the database-first functionality available in LLBLGen Pro v3.0. LLBLGen Pro v3.0 also supports model-first functionality, which I hope to illustrate in an upcoming video. LLBLGen Pro v3.0 is currently in beta and is scheduled to RTM some time in May 2010. It supports the following frameworks out of the box, with more scheduled to follow in the coming year: LLBLGen Pro RTL (our own o/r mapper framework), Linq to Sql, NHibernate and Entity Framework (v1 and v4). The video I linked to below illustrates the creation of an entity model for Entity Framework v4, by reverse engineering the SQL Server 2008 example database 'AdventureWorks'. The following topics (among others) are included in the video:

    - Abbreviation support (example: convert 'Qty' into 'Quantity' during name construction)
    - Flexible, framework-specific settings
    - Attribute definitions for various elements (so no requirement for buddy classes or messing with generated code or templates)
    - Retrieval of relational model data from a database
    - Reverse engineering of tables into entities, automatically placed in groups
    - Auto-creation of inheritance hierarchies
    - Refactoring of entity fields into Value Type Definitions (DDD)
    - Mapping a Typed view onto a stored procedure resultset
    - Creation of a Typed list (definition of a query with a projection) on a set of related entities
    - Validation and correction of found inconsistencies and errors
    - Generating code using one of the pre-defined presets
    - Illustration of the code in vs.net 2010

    It also gives a good overview of what it takes with LLBLGen Pro v3.0 to start from a new project, point it to a database, get an entity model, perform tweaks and validation and generate code which is ready to run. I am no video recording expert, so there's no audio and some mouse movements might be a little too quick. If that's the case, please pause the video. It's rather big (52MB). Click here to open the HTML page with the video (Flash). Opens in a new window. LLBLGen Pro v3.0 is currently in beta (available for v2.x customers) and scheduled to be released somewhere in May 2010.

    Read the article

  • SSIS code smell – Unused columns in the dataflow

    - by jamiet
    A code smell is defined on Wikipedia as a "symptom in the source code of a program that possibly indicates a deeper problem". It's a term commonly used by our code-writing brethren to describe sub-optimal code, but I think the term can be applied equally well to SSIS packages too, as I shall now explain. One of my pet hates about SSIS development is packages that throw warnings of the form:

        The output column "ColumnName" (1358) on output "OLE DB Source Output" (1289) and component "OLE_SRC Name" (1279) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.

    The warning is fairly self-explanatory: any column that appears in the data flow but doesn't get used will throw this warning when the data flow is executed. It's not the negligible performance degradation that they cause that bothers me though, it's the clutter that they cause in your log file/table. Take a look at the following screenshot if you don't believe me. There are 231409 such warnings in the system that I took this screenshot from; that is 231409 log records that should not be there. The most infuriating thing about this warning is that it is so easily avoidable; eliminating such columns is a very quick and easy thing to do in the SSIS Designer. The only problem I see is that the warnings don't occur until you execute the package; it would be preferable for the designer to have an unobtrusive way of informing you of them as well. Anyway, I digress. I consider such warnings to be a code smell because, to me, they're symptomatic of a lack of due care and attention; a lack of developer discipline if you will. What other code smells can you think of when building SSIS packages? If I get a good list in the comments maybe I'll compile them into a later blog post. @Jamiet

    Read the article

  • SQLBeat Podcast – Episode 5 – Kevin Kline Talks With Me About SQL, Professional Development and Book Writin’

    - by SQLBeat
    I thought I would be a ball of intimidated nerves when Kevin gladly agreed to speak with me on the podcast this past weekend. After all, he is Kevin Kline of SQL in a Nutshell fame! As it turned out, we had a comfortable and enlightening conversation about Apple MacBooks (is that what they are called?), our beginnings in the industry, the Deep South, health care initiatives and 286's. I almost pulled the plug when Kevin started down the Oracle path though, and for a moment he looked at me as if I was serious. As always on this podcast, it is all in good fun. The picture is of Kevin and me (my shirt is mauve, not pink, by the way) at the after party for SQL Saturday 151 in Orlando, FL, where he also did a pre-con to a sold-out crowd of enthusiastic DBAs. I know they were enthusiastic even though I was not there, because one of the attendees was a friend of mine who went on and on and on about the content, kind of like I am doing here. So I will just stop that and let you proceed to listen. As always, I hope you enjoy, and any feedback on this or future episodes is always welcome. Download the MP3

    Read the article

  • .Net Crystal Report printing application running on terminal service connection errors when session

    - by MrEdmundo
    I have created a .NET application to run on an app server that gets requests for a report and prints out the requested report. The C# application uses Crystal Reports to load the report and subsequently print it. The application runs on a server which is connected to via a Remote Desktop connection under a particular user account (required for old apps). When I disconnect from the remote session, the application starts raising exceptions such as:

        Message: CrystalDecisions.Shared.CrystalReportsException: Load report failed

    This type of error is never raised when the remote session is active. The server running the app is running Windows Server 2003; my box, which creates the connection, is Windows XP. I appreciate this is fairly weird, however I cannot see any problem with the application deployment I have created. Does anyone know what could be causing this issue?

    EDIT: I bit the bullet and created the application as a Windows service. Obviously this doesn't take long; I just wasn't convinced it would solve the problem. Anyway, it doesn't! I have also tried removing the multi-thread code that was calling the print function asynchronously. I did this in order to simplify the app and narrow down the reason it could fail. Anyway, this didn't improve the situation either!

    EDIT: The two errors I get are:

        System.Runtime.InteropServices.COMException (0x80000201): Invalid printer specified.
           at CrystalDecisions.ReportAppServer.Controllers.PrintOutputControllerClass.ModifyPrinterName(String newVal)
           at CrystalDecisions.CrystalReports.Engine.PrintOptions.set_PrinterName(String value)
           at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    The printer isn't invalid; this is confirmed when, 60 seconds later, the timer ticks and the report is printed successfully. And:

        The request could not be submitted for background processing.
           at CrystalDecisions.ReportAppServer.Controllers.ReportSourceClass.GetLastPageNumber(RequestContext pRequestContext)
           at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
           --- End of inner exception stack trace ---
           at CrystalDecisions.ReportAppServer.ConvertDotNetToErom.ThrowDotNetException(Exception e)
           at CrystalDecisions.ReportSource.EromReportSourceBase.GetLastPageNumber(ReportPageRequestContext reqContext)
           at CrystalDecisions.CrystalReports.Engine.FormatEngine.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
           at CrystalDecisions.CrystalReports.Engine.ReportDocument.PrintToPrinter(Int32 nCopies, Boolean collated, Int32 startPageN, Int32 endPageN)
           at Dsa.PrintServer.Service.Service.PrintCrystalReport(Report report)

    EDIT: I ran Filemon to check if there were any access issues. At the point when the error occurs, Filemon reports:

        Request: OPEN | Path: C:\windows\assembly\gac_msil\system\2.0.0.0__b77a5c561934e089\ws2_32.dll | Result: NOT FOUND | Other: Attributes Error

    Read the article

  • LLBLGen Pro v3.0 has been released!

    - by FransBouma
    After two years of hard work we released v3.0 of LLBLGen Pro today! V3.0 comes with a completely new designer which has been developed from the ground up for .NET 3.5 and higher. Below I'll briefly mention some highlights of this new release:

    - Entity Framework (v1 & v4) support
    - NHibernate support (hbm.xml mappings & FluentNHibernate mappings)
    - Linq to Sql support
    - Allows both model-first and database-first development, or a mixture of both
    - .NET 4.0 support
    - Model views
    - Grouping of project elements
    - Linq-based project search
    - Value Type (DDD) support
    - Multiple database types in a single project
    - XML-based project file
    - Integrated template editor
    - Relational Model Data management
    - Flexible attribute declaration for code generation, no more buddy classes needed
    - Fine-grained project validation
    - Update / Create DDL SQL scripts
    - Fast Text-DSL based Quick mode
    - Powerful text-DSL based Quick Model functionality
    - Per target framework extensible settings framework
    - much, much more...

    Of course we still support our own O/R mapper framework, the LLBLGen Pro v3.0 runtime framework, which was updated with some minor features and was upgraded to use the DbProviderFactory system. Please watch the videos of the designer (more to come very soon!) to see some aspects of the new designer in action. The full version comes with Algorithmia in source code as well. Algorithmia is an algorithm library written for .NET 3.5 which powers the heart of the designer with a fine-grained undo/redo command framework, graph classes and much more. I'd like to thank all beta testers, our support team and others who have helped us with this massive release. :)

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN's SSIS forum the other day that went something like this: "I'm loading data into a table from a flat file but I want to be able to store the name of that file as well. Is there a way of doing that?" I don't want to come across as disrespecting those who took the time to reply, but there were a few answers along the lines of "loop over the files using a For Each, store the file name in a variable, yadda yadda yadda" when in fact there is a much, much simpler way of accomplishing this; it just happens to be a little hidden away, as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which for some reason isn't exposed through the Flat File Source editor; it is, however, exposed via the Advanced Properties. You'll see in the screenshot above that I have set FileNameColumnName="Filename" (it doesn't matter what name you use; any non-empty string will work). What this will do is create a new column in our dataflow called "Filename" that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager, as it allows you to differentiate between data from each of the files, as you can see in the following screenshot. So there you have it, the FileNameColumnName property; a little-known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet

    Read the article

  • Calculating estimated data loss with Always on

    - by blakmk
    Ever wondered how to calculate estimated data loss (time) for AlwaysOn? The AlwaysOn dashboard shows the metric quite nicely, but there does seem to be a lack of documentation about where the metrics come from. Here's a script that calculates the data loss (lag) so you can set up alerts based on your DR SLAs:

        WITH DR_CTE (replica_server_name, database_name, last_commit_time) AS
        (
            select ar.replica_server_name, database_name, rs.last_commit_time
            from master.sys.dm_hadr_database_replica_states rs
            inner join master.sys.availability_replicas ar on rs.replica_id = ar.replica_id
            inner join sys.dm_hadr_database_replica_cluster_states dcs on dcs.group_database_id = rs.group_database_id and rs.replica_id = dcs.replica_id
            where replica_server_name != @@servername
        )
        select ar.replica_server_name, dcs.database_name, rs.last_commit_time,
               DR_CTE.last_commit_time 'DR_commit_time',
               datediff(ss, DR_CTE.last_commit_time, rs.last_commit_time) 'lag_in_seconds'
        from master.sys.dm_hadr_database_replica_states rs
        inner join master.sys.availability_replicas ar on rs.replica_id = ar.replica_id
        inner join sys.dm_hadr_database_replica_cluster_states dcs on dcs.group_database_id = rs.group_database_id and rs.replica_id = dcs.replica_id
        inner join DR_CTE on DR_CTE.database_name = dcs.database_name
        where ar.replica_server_name = @@servername
        order by lag_in_seconds desc
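    As a rough illustration of how the lag figure can drive an alert, the query above can be wrapped so that it raises an error whenever the worst lag exceeds an agreed threshold. This is only a sketch: the 300-second SLA, the severity level and the view name dbo.AG_DataLoss (assumed to wrap the query above) are not part of the original script.

        -- Minimal sketch: alert when estimated data loss exceeds an assumed 300-second SLA.
        -- Assumes the query above has been saved as a view named dbo.AG_DataLoss (hypothetical name).
        DECLARE @sla_seconds int = 300;
        DECLARE @worst_lag   int;

        SELECT @worst_lag = MAX(lag_in_seconds) FROM dbo.AG_DataLoss;

        IF @worst_lag > @sla_seconds
            RAISERROR('Estimated data loss of %d seconds exceeds the %d second SLA.', 16, 1, @worst_lag, @sla_seconds);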

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search URLs to Solr HTTP queries and deserializing Solr's result XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)

    What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all.

    The SQL solution for faceted search: our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet query would be something like:

        SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...)
        FROM Houses
        WHERE city = 'Amsterdam'

    While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).

    Enter Solr. "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr) Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages. Getting faceted search results from Solr is very easy; first let me show you how to send an HTTP query to Solr:

        http://localhost:8983/solr/select?q=city:Amsterdam

    This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):

        <response>
          <result name="response" numFound="3" start="0">
            <doc>
              <long name="id">3203</long>
              <str name="city">Amsterdam</str>
              <str name="steet">Keizersgracht</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
            <doc>
              <long name="id">3205</long>
              <str name="city">Amsterdam</str>
              <str name="steet">Vondelstraat</str>
              <int name="numberOfBathrooms">3</int>
            </doc>
            <doc>
              <long name="id">4293</long>
              <str name="city">Amsterdam</str>
              <str name="steet">Wibautstraat</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
          </result>
        </response>

    By adding a facet query part for the field "numberOfBathrooms", Solr will return the facets for this particular field. We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms (a plain T-SQL equivalent of these facet counts is sketched after this article).

        http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

    The complete XML response from Solr now looks like:

        <response>
          <result name="response" numFound="3" start="0">
            <doc>
              <long name="id">3203</long>
              <str name="city">Amsterdam</str>
              <str name="steet">Keizersgracht</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
            <doc>
              <long name="id">3205</long>
              <str name="city">Amsterdam</str>
              <str name="steet">Vondelstraat</str>
              <int name="numberOfBathrooms">3</int>
            </doc>
            <doc>
              <long name="id">4293</long>
              <str name="city">Amsterdam</str>
              <str name="steet">Wibautstraat</str>
              <int name="numberOfBathrooms">2</int>
            </doc>
          </result>
          <lst name="facet_fields">
            <lst name="numberOfBathrooms">
              <int name="2">2</int>
              <int name="3">1</int>
            </lst>
          </lst>
        </response>

    Trying Solr for yourself: to run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial you're using Jetty as a Java servlet container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page. Some best practices for running Solr on Windows:

    - Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
    - Use a .NET XmlReader to convert Solr's XML output stream to .NET objects. Don't use XPath; it won't scale well.
    - Use filter queries ("fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).

    In my next post I'll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
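    For readers more at home in T-SQL than in Solr's response format, the facet counts shown above correspond to a plain GROUP BY aggregation. This is only an illustrative sketch using the Houses table and columns from the example above; it is not code from the original post.

        -- Equivalent facet counts in T-SQL over the example Houses table (illustrative only)
        SELECT numberOfBathrooms, COUNT(*) AS facet_count
        FROM Houses
        WHERE city = 'Amsterdam'
        GROUP BY numberOfBathrooms
        ORDER BY numberOfBathrooms;
        -- Returns: 2 -> 2 houses, 3 -> 1 house, matching Solr's facet_fields output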

    Read the article

  • Dynamic Unpivot : SSIS Nugget

    - by jamiet
    A question on the SSIS forum earlier today asked: "I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of values. I want to unpivot it without editing my SSIS package that is deployed."

    Let's be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example, it converts something like this:

        AccountCode  Jan     Feb     Mar
        AC1          100.00  150.00  125.00
        AC2          45.00   75.50   90.00

    into something like this:

        AccountCode  Month  Amount
        AC1          Jan    100.00
        AC1          Feb    150.00
        AC1          Mar    125.00
        AC2          Jan    45.00
        AC2          Feb    75.50
        AC2          Mar    90.00

    (A static T-SQL equivalent of this example is sketched at the end of this article.) The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example; however, in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change, and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design time. There is a workaround however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that "column" and (3) output each one as a column all of its own. Let's go over that in a bit more detail. I've prepared a data file that shows some data that we want to unpivot, which shows some customers and their mythical shopping lists (it has column names in the first row). We use a Flat File Connection Manager to specify the format of our data file to SSIS, and a Flat File Source Adapter to put it into the dataflow (no need for a screenshot of that one; it's very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component where the real work goes on, although the code is pretty simple. Here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into a row all of their own, thus accomplishing the Dynamic Unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet
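    When the set of month columns is fixed rather than growing, the same reshaping can be done without SSIS at all. Here is a minimal T-SQL sketch of the static case, assuming the first table above is stored as a table named dbo.AccountAmounts (a name invented for this example); it does not address the dynamic-column scenario the post is about.

        -- Static unpivot of fixed Jan/Feb/Mar columns (illustrative; table name is assumed)
        SELECT AccountCode, [Month], Amount
        FROM dbo.AccountAmounts
        UNPIVOT (Amount FOR [Month] IN ([Jan], [Feb], [Mar])) AS u;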

    Read the article

  • Setting a Visual State from a data bound enum in WPF

    - by firoso
    Hey all, I've got a scenario where I want to switch the visiblity of 4 different content controls. The visual states I have set opacity, and collapsed based on each given state (See code.) What I'd like to do is have the visual state bound to a property of my View Model of type Enum. I tried using DataStateBehavior, but it requires true/false, which doesn't work for me. So I tried DataStateSwitchBehavior, which seems to be totally broken for WPF4 from what I could tell. Is there a better way to be doing this? I'm really open to different approaches if need be, but I'd really like to keep this enum in the equation. Edit: The code shouldn't be too important, I just need to know if there's a well known solution to this problem. <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:Custom="http://schemas.microsoft.com/expression/2010/interactivity" xmlns:ei="http://schemas.microsoft.com/expression/2010/interactions" xmlns:ee="http://schemas.microsoft.com/expression/2010/effects" xmlns:customBehaviors="clr-namespace:SEL.MfgTestDev.ESS.Behaviors" x:Class="SEL.MfgTestDev.ESS.View.PresenterControl" mc:Ignorable="d" d:DesignHeight="624" d:DesignWidth="1104" d:DataContext="{Binding ApplicationViewModel, Mode=OneWay, Source={StaticResource Locator}}"> <Grid> <Grid.Resources> <ResourceDictionary> <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="Layout/TerminalViewTemplate.xaml"/> <ResourceDictionary Source="Layout/DebugViewTemplate.xaml"/> <ResourceDictionary Source="Layout/ProgressViewTemplate.xaml"/> <ResourceDictionary Source="Layout/LoadoutViewTemplate.xaml"/> </ResourceDictionary.MergedDictionaries> </ResourceDictionary> </Grid.Resources> <Custom:Interaction.Behaviors> <customBehaviors:DataStateSwitchBehavior Binding="{Binding ApplicationViewState}"> <customBehaviors:DataStateSwitchCase State="LoadoutState" Value="Loadout"/> </customBehaviors:DataStateSwitchBehavior> </Custom:Interaction.Behaviors> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="ApplicationStates" ei:ExtendedVisualStateManager.UseFluidLayout="True"> <VisualStateGroup.Transitions> <VisualTransition GeneratedDuration="0:0:1"> <VisualTransition.GeneratedEasingFunction> <SineEase EasingMode="EaseInOut"/> </VisualTransition.GeneratedEasingFunction> <ei:ExtendedVisualStateManager.TransitionEffect> <ee:SmoothSwirlGridTransitionEffect/> </ei:ExtendedVisualStateManager.TransitionEffect> </VisualTransition> </VisualStateGroup.Transitions> <VisualState x:Name="LoadoutState"> <Storyboard> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="LoadoutPage"> <EasingDoubleKeyFrame KeyTime="0" Value="1"/> </DoubleAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="LoadoutPage"> <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="ProgressState"> <Storyboard> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="ProgressPage"> <EasingDoubleKeyFrame KeyTime="0" Value="1"/> </DoubleAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" 
Storyboard.TargetName="ProgressPage"> <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="DebugState"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="DebugPage"> <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="DebugPage"> <EasingDoubleKeyFrame KeyTime="0" Value="1"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="TerminalState"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="TerminalPage"> <DiscreteObjectKeyFrame KeyTime="0" Value="{x:Static Visibility.Visible}"/> </ObjectAnimationUsingKeyFrames> <DoubleAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Opacity)" Storyboard.TargetName="TerminalPage"> <EasingDoubleKeyFrame KeyTime="0" Value="1"/> </DoubleAnimationUsingKeyFrames> </Storyboard> </VisualState> </VisualStateGroup> </VisualStateManager.VisualStateGroups> <ContentControl x:Name="LoadoutPage" ContentTemplate="{StaticResource LoadoutViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/> <ContentControl x:Name="ProgressPage" ContentTemplate="{StaticResource ProgressViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/> <ContentControl x:Name="DebugPage" ContentTemplate="{StaticResource DebugViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/> <ContentControl x:Name="TerminalPage" ContentTemplate="{StaticResource TerminalViewTemplate}" Opacity="0" Content="{Binding}" Visibility="Collapsed"/> <TextBlock HorizontalAlignment="Left" TextWrapping="Wrap" VerticalAlignment="Top" Text="{Binding ApplicationViewState}"> <TextBlock.Background> <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0"> <GradientStop Color="Black" Offset="0"/> <GradientStop Color="White" Offset="1"/> </LinearGradientBrush> </TextBlock.Background> </TextBlock> </Grid>

    Read the article

  • How to list column headers of a SQL Server table using sp_help perhaps?

    - by Hamish Grubijan
    Hi, I have a few tables with 70-80 columns in them. I would like to populate them with somewhat random data, unless I will not be able to do so due to key violations, etc. The first step would be simply to get the list of all headers. There seem to be two ways:

    A) Run select * from table_of_interest; in MSFT SQL Server Management Studio 2008, then right-click the result and click "Copy With Headers". However, I get zero rows back, and when I try to copy nothing + headers, I get:

        TITLE: Microsoft SQL Server Management Studio
        Value cannot be null.
        Parameter name: data (System.Windows.Forms)
        BUTTONS: OK

    This looks like a bug ... anyhow ... there is another way.

    B) I can run sp_help table_of_interest;. However, I end up getting too much back. I get 7 different tables back but I am only interested in the second one. The columns of the second table are:

        Column_name | Type | Computed | Length | Prec | Scale | Nullable | TrimTrailingBlanks | FixedLenNullInSource | Collation

    I might be interested in just Column_name and Type, but maybe other columns too. So ... since sp_help probably runs a bunch of queries ... how do I get under the hood? How can I run the second query AND filter down the number of columns that I am interested in? Many thanks!
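    One way to get at the same information without going through sp_help is to query the catalog directly. A minimal sketch, assuming the table is the table_of_interest mentioned above and that it lives in the dbo schema:

        -- Column names and types straight from the catalog (dbo schema assumed)
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'dbo'
          AND TABLE_NAME = 'table_of_interest'
        ORDER BY ORDINAL_POSITION;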

    Read the article

  • java.sql.SQLException: OALL8 is in an inconsistent state On weblogic 9

    - by user179056
    Hello, we are getting the error "java.sql.SQLException: OALL8 is in an inconsistent state" when executing our web app on WebLogic 9. The JDK used is 1.5 and the database is Oracle 10.2. We have switched our Oracle drivers from ojdbc14.jar to ojdbc5.jar. We have also added orai18n.jar. We have ensured that this change of jar occurs in the web app library as well as the other WebLogic server classpaths where ojdbc14.jar existed. The problem persists. Any pointers would help. Regards, Sameer

    Read the article

  • How to: Check which table is the biggest, in SQL Server

    - by AngelEyes
    The company I work with had its DB double in size lately, so I needed to find out which tables were the biggest. I found this on the web and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database; the code is from http://www.sqlteam.com/downloads/BigTables.sql

        /**************************************************************************************
        *
        *  BigTables.sql
        *  Bill Graziano (SQLTeam.com)
        *  [email protected]
        *  v1.1
        *
        **************************************************************************************/
        DECLARE @id INT
        DECLARE @type CHARACTER(2)
        DECLARE @pages INT
        DECLARE @dbname SYSNAME
        DECLARE @dbsize DEC(15, 0)
        DECLARE @bytesperpage DEC(15, 0)
        DECLARE @pagesperMB DEC(15, 0)

        CREATE TABLE #spt_space
          (
             objid    INT NULL,
             ROWS     INT NULL,
             reserved DEC(15) NULL,
             data     DEC(15) NULL,
             indexp   DEC(15) NULL,
             unused   DEC(15) NULL
          )

        SET nocount ON

        -- Create a cursor to loop through the user tables
        DECLARE c_tables CURSOR FOR
          SELECT id
          FROM   sysobjects
          WHERE  xtype = 'U'

        OPEN c_tables

        FETCH NEXT FROM c_tables INTO @id

        WHILE @@FETCH_STATUS = 0
          BEGIN
              /* Code from sp_spaceused */
              INSERT INTO #spt_space
                          (objid,
                           reserved)
              SELECT objid = @id,
                     SUM(reserved)
              FROM   sysindexes
              WHERE  indid IN ( 0, 1, 255 )
                     AND id = @id

              SELECT @pages = SUM(dpages)
              FROM   sysindexes
              WHERE  indid < 2
                     AND id = @id

              SELECT @pages = @pages + Isnull(SUM(used), 0)
              FROM   sysindexes
              WHERE  indid = 255
                     AND id = @id

              UPDATE #spt_space
              SET    data = @pages
              WHERE  objid = @id

              /* index: sum(used) where indid in (0, 1, 255) - data */
              UPDATE #spt_space
              SET    indexp = (SELECT SUM(used)
                               FROM   sysindexes
                               WHERE  indid IN ( 0, 1, 255 )
                                      AND id = @id) - data
              WHERE  objid = @id

              /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */
              UPDATE #spt_space
              SET    unused = reserved - (SELECT SUM(used)
                                          FROM   sysindexes
                                          WHERE  indid IN ( 0, 1, 255 )
                                                 AND id = @id)
              WHERE  objid = @id

              UPDATE #spt_space
              SET    ROWS = i.ROWS
              FROM   sysindexes i
              WHERE  i.indid < 2
                     AND i.id = @id
                     AND objid = @id

              FETCH NEXT FROM c_tables INTO @id
          END

        SELECT TOP 25 table_name = (SELECT LEFT(name, 25)
                                    FROM   sysobjects
                                    WHERE  id = objid),
                      ROWS = CONVERT(CHAR(11), ROWS),
                      reserved_kb = Ltrim(Str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      data_kb = Ltrim(Str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      index_size_kb = Ltrim(Str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),
                      unused_kb = Ltrim(Str(unused * d.low / 1024., 15, 0) + ' ' + 'KB')
        FROM   #spt_space,
               MASTER.dbo.spt_values d
        WHERE  d.NUMBER = 1
               AND d.TYPE = 'E'
        ORDER  BY reserved DESC

        DROP TABLE #spt_space
        CLOSE c_tables
        DEALLOCATE c_tables
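    On SQL Server 2005 and later, a shorter query over sys.dm_db_partition_stats gives a similar ranking without cursors or the older sysindexes view. This is a sketch of that alternative, not part of the original script.

        -- Rank tables by reserved space using sys.dm_db_partition_stats (SQL Server 2005+)
        SELECT TOP 25
               OBJECT_SCHEMA_NAME(o.object_id) + '.' + o.name AS table_name,
               SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS [rows],
               SUM(ps.reserved_page_count) * 8 AS reserved_kb,
               SUM(ps.used_page_count) * 8     AS used_kb
        FROM sys.dm_db_partition_stats AS ps
        JOIN sys.objects AS o ON o.object_id = ps.object_id
        WHERE o.type = 'U'
        GROUP BY o.object_id, o.name
        ORDER BY reserved_kb DESC;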

    Read the article

  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • How I use schemas.

    - by Alexander Kuznetsov
    I use schemas to simplify granting permissions. For tables and views, I have three schemas:

    - Data: the actual data my customers need. Can only be modified via sprocs.
    - Staging: only visible to data loaders and devs. Full INSERT/UPDATE/DELETE privileges for those who see it.
    - Config: the configuration data used in loads, only visible to data loaders and devs. Can only be modified via sprocs.

    For sprocs/UDFs I have the following schemas:

    - Readers
    - Writers
    - ETL
    - ConfigReaders
    - ConfigWriters

    Also I have dbo...(read more)
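    To make the "simplify granting permissions" point concrete, schema-level grants replace object-by-object grants. A minimal sketch follows; the role names (CustomerRole, LoaderRole) are invented for this example and are not from the original post.

        -- Grant at the schema level instead of object by object (role names are assumed)
        GRANT EXECUTE ON SCHEMA::Readers TO CustomerRole;            -- customers read data only through sprocs
        GRANT SELECT  ON SCHEMA::Data    TO CustomerRole;            -- or expose the Data tables/views read-only
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Staging TO LoaderRole;
        GRANT EXECUTE ON SCHEMA::ETL     TO LoaderRole;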

    Read the article
