Search Results

Search found 3604 results on 145 pages for 'alex smith'.

Page 17 of 145

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to shrinking a database, covering why shrinking a database is not good: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server, and SQL SERVER – What the Business Says Is Not What the Business Wants. I received many comments asking why database shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:

    1. Create a test database
    2. Create two tables and populate them with data
    3. Check the size of both tables (the size of the database is very low)
    4. Check the fragmentation of one table (fragmentation will be very low)
    5. TRUNCATE the other table
    6. Check the size of the table, and check the fragmentation of the first table again (fragmentation will still be very low)
    7. SHRINK the database
    8. Check the size of the table, and check the fragmentation of the first table (fragmentation will be very HIGH)
    9. REBUILD the index on the first table
    10. Check the size of the table (the size of the database is very HIGH), and check the fragmentation of the first table (fragmentation will be very low)

    Here is the script for the same:

    USE MASTER
    GO
    CREATE DATABASE ShrinkIsBed
    GO
    USE ShrinkIsBed
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Create FirstTable
    CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ([ID] ASC) ON [PRIMARY]
    GO
    -- Create SecondTable
    CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
    GO
    -- Create Clustered Index on ID
    CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ([ID] ASC) ON [PRIMARY]
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO FirstTable (ID, FirstName, LastName, City)
    SELECT TOP 100000
        ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Insert One Hundred Thousand Records
    INSERT INTO SecondTable (ID, FirstName, LastName, City)
    SELECT TOP 100000
        ROW_NUMBER() OVER (ORDER BY a.name) RowID,
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO

    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the shrinking operation, we were able to reduce the size of the database, but if you notice the fragmentation, it is considerably high. The major problem with the shrink operation is that it increases the fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from the affected table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe the fragmentation and database size.

    -- Rebuild Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO

    You can notice that after rebuilding, fragmentation drops to a very low value (almost the same as the original); however, the database size grows much larger than the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild happens in the same user database where the index is located, and this usually increases the size of the database. Look at the irony of shrinking a database.
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which causes the database to grow well beyond its original size (before shrinking). So, by shrinking, one usually does not gain what one was looking for, and rebuilding the index is not the best suggestion either, as it makes the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and an interesting conversation. Let us run the following script, where we SHRINK the database and then REORGANIZE the index.

    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Shrink the Database
    DBCC SHRINKDATABASE (ShrinkIsBed);
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO
    -- Reorganize Index on SecondTable
    ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
    GO
    -- Name of the Database and Size
    SELECT name, (size*8) Size_KB
    FROM sys.database_files
    GO
    -- Check Fragmentations in the database
    SELECT avg_fragmentation_in_percent, fragment_count
    FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
    GO

    You can see that REORGANIZE does not increase the size of the database, nor does it remove the fragmentation. Again, I am in no way suggesting that REORGANIZE is the solution here; this is purely an observation from the demo. Read the blog post of Paul Randal. The following script will clean up the database:

    -- Clean up
    USE MASTER
    GO
    ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    DROP DATABASE ShrinkIsBed
    GO

    There are a few valid cases for shrinking a database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild an index in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Banking sector: a technology transformation challenge

    - by Fabian Gradolph
    The financial sector finds itself at a key moment, not only because of the current economic climate, but also because of structural and regulatory issues that oblige banks (usually at the forefront of technological innovation, by the way) to keep taking steps toward the future, keeping technology at the heart of their business strategy. This was made clear at the event held today in Madrid, Oracle in Banking, where Oracle experts, company customers, and analysts put on the table some of the challenges facing the sector, along with ideas for getting the most out of technology in solving them. The event was a complete success, with massive attendance by customers and partners. The image illustrating this article shows, in this order: a view of the room; Modesto Villajos, Regional Sales Manager at Oracle, who acted as master of ceremonies; Leopoldo Boado, Country Manager of Oracle Spain, who gave the introduction; Alex Kwiatkowski, of IDC, who laid out the main challenges facing banking; and Máximo Díez, Senior Director Financial Services at Oracle, who set out the different transformation strategies banks can undertake. The event was rounded out by talks from Oracle customers (Banco Espírito Santo -BES- of Portugal, and BBVA of Spain) and by technical presentations and demonstrations.

    Of particular interest was Alex Kwiatkowski's talk. From his point of view, there are four essential areas confronting the sector. The first is the regulatory framework. The financial sector is under constant regulatory pressure (probably heightened in these times of uncertainty), not only at the national level but also at the European and global levels, and impeccable compliance with all these rules is essential to the proper functioning of the system. The second critical area is the need to offer a satisfying multichannel user experience, one that strengthens customer retention. It is sometimes hard for us to realize it, but these days our interactions with the bank span a great diversity of channels (branch, ATM, Internet, telephone banking, mobile banking...). This poses a permanent technology and process challenge for financial institutions. The third critical element is increasing operational efficiency while keeping costs under control, or even reducing them further. Finally, banks face the challenge of finding new sources of revenue, so that the focus no longer rests solely on cost reduction and risk minimization. The truth is that at present attention centers mainly on those two points, but as Alex Kwiatkowski put it, "banks' CIOs are going to show up at the CEO's desk with the need to carry out complete overhauls of their core banking systems and the need to invest in developing new channels". Máximo Díez also emphasized this need in his presentation. Banks have an obligation to find new formulas to drive growth, but implementing strategies along these lines poses serious challenges because of the limitations of existing IT systems. There is no doubt that a very interesting technological future lies ahead for the financial sector.
    What Oracle can do for and offers to financial institutions can be found at this link: Financial Services.

    Read the article

  • Silverlight Cream for February 21, 2011 -- #1049

    - by Dave Campbell
    In this Issue: Rob Eisenberg(-2-), Gill Cleeren, Colin Eberhardt, Alex van Beek, Ishai Hachlili, Ollie Riches, Kevin Dockx, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), and John Papa.

    Above the Fold:
    Silverlight: "Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4" Alex van Beek
    WP7: "Google Sky on Windows Phone 7" Colin Eberhardt

    Shoutouts: My friends at SilverlightShow have their top 5 for last week posted: SilverlightShow for Feb 14 - 20, 2011

    From SilverlightCream.com:

    Rob Eisenberg MVVMs Us with Caliburn.Micro!
    Rob Eisenberg chats with Carl and Richard on .NET Rocks episode 638 about Caliburn.Micro, which takes Convention-over-Configuration further, utilizing naming conventions to handle a large number of data binding, validation, and other action-based characteristics in your app.

    Two Caliburn Releases in One Day!
    Rob Eisenberg also announced that release candidates for both Caliburn 2.0 and Caliburn.Micro 1.0 are now available. Check out the docs and get the bits.

    Getting ready for Microsoft Silverlight Exam 70-506 (Part 6)
    Gill Cleeren has Part 6 of his series on getting ready for the Silverlight exam up at SilverlightShow... this time out, Gill is discussing app startup, localization, and using resource dictionaries, just to name a few things.

    Google Sky on Windows Phone 7
    Colin Eberhardt has a very cool WP7 app described where he's using Google Sky as the tile source for Bing Maps, and then has a list of 110 Messier Objects... interesting astronomical objects that you can look at... all with source!

    Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4
    Alex van Beek has some Prism4/Unity MVVM goodness up with this discussion of a login module using View and ViewModel base classes.

    Windows Phone 7 and WCF REST – Authentication Solutions
    Ishai Hachlili sent me this link to his post about a WCF REST web service and authentication for WP7, and he offers up 2 solutions... from the looks of this, I'm also putting his blog on my watch list.

    WP7Contrib: Isolated Storage Cache Provider
    Ollie Riches has a complete explanation and code example of using the IsolatedStorageCacheProvider in their WP7Contrib library.

    Using a ChannelFactory in Silverlight, part two: binary cows & new-born calves
    Kevin Dockx follows up his post on Channel Factories with this part 2, expanding the knowledge base into using parameters and custom binding with binary encoding, both from reader suggestions.

    All about UriMapping in WP7
    WindowsPhoneGeek has a post up about URI mappings in WP7... what it is, how to enable it in code-behind or XAML, then using it either with a hyperlink button or via the NavigationService class... all with code.

    Passing WP7 Memory Consumption requirements with the Coding4Fun MemoryCounter tool
    WindowsPhoneGeek's latest is a tutorial on the use of the Memory Counter control from the Coding4Fun toolkit and WP7 memory consumption.

    Getting Started With Linq
    Jesse Liberty gets into LINQ in Episode 33 of his WP7 'From Scratch' series... looks like a good LINQ starting point, and he's going to be doing a series on it.

    Linq with Objects
    In his second post on LINQ, Jesse Liberty is looking at creating a Linq query against a collection of objects... always good stuff, Jesse!

    Silverlight TV 62: The Silverlight 5 Triad Unplugged
    John Papa is joined by Sam George, Larry Olson, and Vijay Devetha (the Silverlight Triad) on this Silverlight TV episode 62 to discuss how the team works together, and hey... they're hiring!
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love.

    I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge: encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem, which was most of the time, we had to enter into a time-consuming to-and-fro conversation with the end-user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow.

    To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we're using SmartAssembly's Automated Error Reporting, we get to avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs.

    Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
    Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machines. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.

    Read the article

  • Announcing the Winnipeg VS.NET 2012 Community Launch Event!

    - by D'Arcy Lussier
    Back in May 2010 the local Winnipeg technical community got together and put on a launch event for VS.NET 2010. That event was such a good time that we’re doing it again this year for the VS.NET 2012 launch! On December 6th, the Winnipeg .NET User Group is hosting a full day VS.NET 2012 Community Launch Event at the IMAX theatre in Portage Place! We have 4 sessions planned covering dev tools, ALM/TFS, web development, and cloud development, presented by Dylan Smith, Tyler Doerksen, and myself. You can get all the details and register on our Eventbrite site: http://wpgvsnet2012launch.eventbrite.ca/

    I’ve included the details below as well for convenience:

    Winnipeg VS.NET 2012 Community Launch Event
    Join us for a full day of sessions highlighting the new features and capabilities of Visual Studio .NET 2012 and the .NET 4.5 Framework! Hosted by the Winnipeg .NET User Group, this community event is FREE thanks to the generous support from our event sponsors: Imaginet, Online Business Systems, and Prairie Developer Conference.

    Event Details
    When: Thursday, December 6th from 8:00 AM - 4:00 PM
    Where: IMAX Theatre, Portage Place
    Cost: *FREE!*

    Agenda
    8:00 - 9:00    Continental Breakfast and Registration
    9:00 - 9:15    Welcome
    9:15 - 10:30   End-To-End Application Lifecycle Management with TFS 2012
    10:30 - 10:45  Break
    10:45 - 12:00  Improving Developer Productivity with Visual Studio 2012
    12:00 - 1:00   Lunch Break (Lunch Not Provided)
    1:00 - 2:15    Web Development in Visual Studio 2012 and .NET 4.5
    2:15 - 2:30    Break
    2:30 - 3:45    Microsoft Cloud Development with Azure and Visual Studio 2012
    3:45 - 4:00    Prizes and Thanks

    Session Abstracts

    End-To-End Application Lifecycle Management with TFS 2012
    Dylan Smith, Imaginet
    In this session we'll walk through the application development lifecycle from end-to-end and see how some of the new capabilities in TFS 2012 help streamline the software delivery process. There are some exciting new capabilities around Agile Project Management, Gathering Feedback, Code Reviews, Unit Testing, Version Control, Storyboarding, etc. During this session we’ll follow a fictional software development team through the process of planning, developing, testing, and deployment, focusing on where the new functionality in VS/TFS 2012 fits in to make teams more effective.

    Improving Developer Productivity with Visual Studio 2012
    Dylan Smith, Imaginet
    Microsoft Visual Studio 2012 enables developers to take full advantage of the capability of Windows using the skills and technologies developers already know and love to deliver exceptional and compelling apps. Whether working individually or in a small, medium or large development team, Visual Studio 2012 sets a new standard for development tools, helping teams deliver superior results for their customers that help set them apart from their competitors. In this session we’ll walk through new features in Visual Studio 2012, specifically focusing on how these improve developer productivity.

    Web Development in Visual Studio 2012 and .NET 4.5
    D’Arcy Lussier, Online Business Systems
    It’s an exciting time to be a web developer in the Microsoft ecosystem! The launch of Visual Studio 2012 and .NET 4.5 brings new tooling and features, and the ASP.NET team is continually releasing updates for MVC, SignalR, Web API, and other platform features. In this session we’ll take a tour of the new features and technologies available for Microsoft web developers here in 2012!
    Microsoft Cloud Development with Azure and Visual Studio 2012
    Tyler Doerksen, Imaginet
    Microsoft’s public cloud platform is nearing its third year of public availability, supporting web site/service hosting, storage, relational databases, virtual machines, virtual networks and much more. Windows Azure provides both power and flexibility. But to capture this power you need to have the right tools! This session will demonstrate the primary ways you can harness Windows Azure with the .NET platform. We’ll explain cloud service development, packaging, deployment, and testing, and show how Visual Studio 2012 with the Windows Azure SDK and other Microsoft tools can be used to develop for and manage Windows Azure. Harness the power of the cloud from the comfort of Visual Studio 2012!

    Read the article

  • Vaadin table hide columns and container customization

    - by Alex
    Hello, I am testing a project using Vaadin and Hibernate. I am trying to use the HbnContainer class to show data in a table. The problem is that I do not want to show all the properties of the two classes in the table. For example:

    @Entity
    @Table(name="users")
    class User {
        @Id
        @GeneratedValue(strategy=GenerationType.AUTO)
        private Long id;
        private String name;
        @ManyToOne(cascade=CascadeType.PERSIST)
        private UserRole role;
        // getters and setters
    }

    and a second class:

    @Entity
    @Table(name="user_roles")
    class UserRole {
        @Id
        @GeneratedValue(strategy=GenerationType.AUTO)
        private Long id;
        private String name;
        // getters and setters
    }

    Next, I retrieve my data using the HbnContainer and connect it to the table:

    HbnContainer container = new HbnContainer(User.class, app);
    table.setContainerDataSource(container);

    The table will only display the columns from User, and for the "role" it will show the role id instead. How can I hide that column and replace it with UserRole.name? I managed to use a ColumnGenerator() to get the string value in the table for the UserRole, but I couldn't remove the previous column with the numerical value. What am I missing? Or, what is the best way to "customize" your data before displaying a table (if I want to show data in a table from more than one object type, what do I do?) If I can't find a simple solution soon, I think I will just build the tables "by hand"... So, any advice on this matter? Thank you, Alex

    Read the article

  • Design pattern to keep track of UITableView rows' correspondence to underlying data in constant time

    - by DenNukem
    When my model changes, I want to animate the changes in a UITableView by inserting/deleting rows. For that I need to know the ordinal of the given row (so I can construct an NSIndexPath), which I find hard to do in better-than-linear time. For example, consider that I have a list of addressbook entries which are manually sorted by the user, i.e. there is no ordering "key" that represents the sort order. There is also a corresponding UITableView that shows one row per addressbook entry. When the UITableView queries the data source, I query the NSMutableArray populated with my entries and return the required data in constant time for each row. However, if there is a change in the underlying model, I get a notification: "Joe Smith, id #123 has been removed". Now I have a dilemma. A naive approach would be to scan the array, determine the index at which Joe Smith is, and then ask the UITableView to remove that precise row from the view, also removing it from the array. However, the scan will take linear time to finish. Now I could have an NSDictionary which allows me to find Joe Smith in constant time, but that doesn't do me a lot of good because I still need to find his ordinal index within the array in order to instruct the UITableView to remove that row, which is again a linear search. I could further decide to store each object's ordinal inside the object itself to make the lookup constant, but it will become outdated after the first such update, as all subsequent index values will have changed due to the removal of an object. So what is the correct design pattern to accurately reflect model changes in the UITableView in constant (or at least logarithmic) time?
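
    For context on the "at least logarithmic" part: for deletions, one standard trick is a constant-time map from object id to its initial position, plus a Fenwick (binary indexed) tree counting how many earlier positions have been deleted, so that current ordinal = initial position minus deletions before it. A minimal sketch in Python (the pattern is language-agnostic; every name here is illustrative, nothing is UIKit API):

        class OrdinalTracker:
            """Track current row ordinals under deletions in O(log n)."""

            def __init__(self, n):
                self.n = n
                self.tree = [0] * (n + 1)  # Fenwick tree over deletion flags

            def _deleted_before(self, pos):
                # Count deleted slots among initial positions 0..pos-1.
                count, i = 0, pos
                while i > 0:
                    count += self.tree[i]
                    i -= i & -i
                return count

            def current_index(self, initial_pos):
                # Ordinal today = where it started, minus rows removed above it.
                return initial_pos - self._deleted_before(initial_pos)

            def remove(self, initial_pos):
                # Mark this initial position as deleted.
                i = initial_pos + 1
                while i <= self.n:
                    self.tree[i] += 1
                    i += i & -i

    Paired with a dictionary from entry id to initial position, this yields the NSIndexPath row in O(log n) per removal. Insertions (or a full re-sort) break the "initial position" premise, so this is a sketch for the deletion case only, not a complete answer.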

    Read the article

  • ASP.NET Ajax - Asynch request has separate session???

    - by Marcus King
    We are writing a search application that saves the search criteria to session state and executes the search inside of an ASP.NET UpdatePanel. When we execute multiple searches successively, the 2nd or 3rd search will sometimes return results from the first set of search criteria. Example: in our first search we do a lookup on "John Smith" - John Smith results are displayed. In the second search we do a lookup on "Bob Jones" - John Smith results are displayed. We save all of the search criteria in session state, as I said, and read it from session state inside of the ajax request to format the DB query. When we put breakpoints in VS everything behaves as normal, but without them we get the original search criteria and results. My guess is that, because the criteria are saved in session, the ajax request somehow gets its own session, saves the criteria to that, and then retrieves the criteria from that session every time; the non-async code is able to see when the criteria are modified and saves the changes to state accordingly, but because they are from two different sessions there is a disparity in what is saved and read.

    EDIT: To elaborate more, there was a suggestion of appending the search criteria to the query string, which normally is good practice, and I agree that's how it should be, but given our requirements I don't see it as viable. They want the user to fill out the input controls and hit search with no page reload; the only thing they see is a progress indicator on the page, and they still have the ability to navigate and use other features on the current page. If I were to add criteria to the query string, I would have to do another request, causing the whole page to load, which depending on the search criteria can take a really long time. This is why we are using an ajax call to perform the search and why we aren't causing another full page request... I hope this clarifies the situation.

    Read the article

  • Python sorting list of dictionaries by multiple keys

    - by simi
    I have a list of dicts:

    b = [{u'TOT_PTS_Misc': u'Utley, Alex', u'Total_Points': 96.0},
         {u'TOT_PTS_Misc': u'Russo, Brandon', u'Total_Points': 96.0},
         {u'TOT_PTS_Misc': u'Chappell, Justin', u'Total_Points': 96.0},
         {u'TOT_PTS_Misc': u'Foster, Toney', u'Total_Points': 80.0},
         {u'TOT_PTS_Misc': u'Lawson, Roman', u'Total_Points': 80.0},
         {u'TOT_PTS_Misc': u'Lempke, Sam', u'Total_Points': 80.0},
         {u'TOT_PTS_Misc': u'Gnezda, Alex', u'Total_Points': 78.0},
         {u'TOT_PTS_Misc': u'Kirks, Damien', u'Total_Points': 78.0},
         {u'TOT_PTS_Misc': u'Worden, Tom', u'Total_Points': 78.0},
         {u'TOT_PTS_Misc': u'Korecz, Mike', u'Total_Points': 78.0},
         {u'TOT_PTS_Misc': u'Swartz, Brian', u'Total_Points': 66.0},
         {u'TOT_PTS_Misc': u'Burgess, Randy', u'Total_Points': 66.0},
         {u'TOT_PTS_Misc': u'Smugala, Ryan', u'Total_Points': 66.0},
         {u'TOT_PTS_Misc': u'Harmon, Gary', u'Total_Points': 66.0},
         {u'TOT_PTS_Misc': u'Blasinsky, Scott', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Carter III, Laymon', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Coleman, Johnathan', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Venditti, Nick', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Blackwell, Devon', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Kovach, Alex', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Bolden, Antonio', u'Total_Points': 60.0},
         {u'TOT_PTS_Misc': u'Smith, Ryan', u'Total_Points': 60.0}]

    and I need to use a multi-key sort, reversed by Total_Points, then not reversed by TOT_PTS_Misc. This can be done at the command prompt like so:

    a = sorted(b, key=lambda d: (-d['Total_Points'], d['TOT_PTS_Misc']))

    But I have to run this through a function, where I pass in the list and the sort keys. For example, def multikeysort(dict_list, sortkeys):. How can the lambda line be used to sort the list for an arbitrary number of keys passed in to the multikeysort function, taking into consideration that the sortkeys may have any number of keys, and that those needing reversed sorts will be identified with a '-' before the key name?
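
    For reference, one common shape for such a function: a sketch, assuming the values under each key are mutually comparable and that you are on Python 2.7+ (where functools.cmp_to_key exists):

        from functools import cmp_to_key

        def multikeysort(dict_list, sortkeys):
            # Compare two dicts key by key; a leading '-' on a key name
            # means "sort descending on this key".
            def compare(a, b):
                for key in sortkeys:
                    reverse = key.startswith('-')
                    name = key[1:] if reverse else key
                    result = (a[name] > b[name]) - (a[name] < b[name])
                    if result != 0:
                        return -result if reverse else result
                return 0
            return sorted(dict_list, key=cmp_to_key(compare))

        # a = multikeysort(b, ['-Total_Points', 'TOT_PTS_Misc'])

    Because each key gets its own comparison and direction, this also handles descending sorts on string keys, where the unary-minus trick in the lambda above does not apply.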

    Read the article

  • Getting Vars to bind properly across multiple files

    - by Alex Baranosky
    I am just learning Clojure and am having trouble moving my code into different files. I keep getting this error from apprunner.clj: Exception in thread "main" java.lang.Exception: Unable to resolve symbol: -run-application in this context. It seems to be finding the namespaces fine, but then not seeing the Vars as being bound... Any idea how to fix this? Here's my code:

    APPLICATION RUNNER -

    (ns src/apprunner
      (:use src/functions))

    (def input-files [(resource-path "a.txt") (resource-path "b.txt") (resource-path "c.txt")])
    (def output-file (resource-path "output.txt"))

    (defn run-application []
      (sort-files input-files output-file))

    (-run-application)

    APPLICATION FUNCTIONS -

    (ns src/functions
      (:use clojure.contrib.duck-streams))

    (defn flatten [x]
      (let [s? #(instance? clojure.lang.Sequential %)]
        (filter (complement s?) (tree-seq s? seq x))))

    (defn resource-path [file]
      (str "C:/Users/Alex and Paula/Documents/SoftwareProjects/MyClojureApp/resources/" file))

    (defn split2 [str delim]
      (seq (.split str delim)))

    (defstruct person :first-name :last-name)

    (defn read-file-content [file]
      (apply str (interpose "\n" (read-lines file))))

    (defn person-from-line [line]
      (let [sections (split2 line " ")]
        (struct person (first sections) (second sections))))

    (defn formatted-for-display [person]
      (str (:first-name person) (.toUpperCase " ") (:last-name person)))

    (defn sort-by-keys [struct-map keys]
      (sort-by #(vec (map % [keys])) struct-map))

    (defn formatted-output [persons output-number]
      (let [heading (str "Output #" output-number "\n")
            sorted-persons-for-output (apply str (interpose "\n" (map formatted-for-display (sort-by-keys persons (:first-name :last-name)))))]
        (str heading sorted-persons-for-output)))

    (defn read-persons-from [file]
      (let [lines (read-lines file)]
        (map person-from-line lines)))

    (defn write-persons-to [file persons]
      (dotimes [i 3]
        (append-spit file (formatted-output persons (+ 1 i)))))

    (defn sort-files [input-files output-file]
      (let [persons (flatten (map read-persons-from input-files))]
        (write-persons-to output-file persons)))

    Read the article

  • Doctrine unsigned validation error storing created_at

    - by Alex Dean
    Hi, I'm having problems with the Timestampable functionality in Doctrine 1.2.2. The error I get on trying to save() my Record is:

    Uncaught exception 'Doctrine_Validator_Exception' with message 'Validation failed in class XXX
    1 field had validation error:
    * 1 validator failed on created_at (unsigned)
    ' in ...

    I've created the relevant field in the MySQL table as:

    created_at DATETIME NOT NULL,

    Then in setTableDefinition() I have:

    $this->hasColumn('created_at', 'timestamp', null, array(
        'type' => 'timestamp',
        'fixed' => false,
        'unsigned' => false,
        'primary' => false,
        'notnull' => true,
        'autoincrement' => false,
    ));

    which is taken straight from the output of generateModelsFromDb(). And finally my setUp() looks like:

    public function setUp()
    {
        parent::setUp();
        $this->actAs('Timestampable', array(
            'created' => array(
                'name' => 'created_at',
                'type' => 'timestamp',
                'format' => 'Y-m-d H:i:s',
                'disabled' => false,
                'options' => array()
            ),
            'updated' => array(
                'disabled' => true
            )));
    }

    (I've tried not defining all of those fields for 'created', but I get the same problem.) I'm a bit stumped as to what I'm doing wrong; for one thing, I can't see why Doctrine would be running any unsigned checks against a 'timestamp' datatype... Any help gratefully received! Alex

    Read the article

  • C++ help with getline function with ifstream

    - by John
    So I am writing a program that deals with reading in and writing out to a file. I use the getline() function because some of the lines in the text file may contain multiple elements. I've never had a problem with getline until now. Here's what I've got. The text file looks like this:

    John Smith                      // Client name
    1234 Hollow Lane, Chicago, IL   // Address
    123-45-6789                     // SSN
    Walmart                         // Employer
    58000                           // Income
    2                               // Number of accounts the client has
    1111                            // Account Number
    2222                            // Account Number

    ifstream inFile("ClientInfo.txt");
    if (inFile.fail())
    {
        cout << "Problem opening file.";
    }
    else
    {
        string name, address, ssn, employer;
        double income;
        int numOfAccount;
        getline(inFile, name);
        getline(inFile, address);
        // I'll stop here because I know this is where it fails.

    When I debugged this code, I found that name == "John" instead of name == "John Smith", and address == "Smith", and so on. Am I doing something wrong? Any help would be much appreciated.

    Read the article

  • Changing FileNames using RegEx and Recursion

    - by yeahumok
    Hello, I'm trying to rename files that my program lists as having "illegal characters" for a SharePoint file importation. The illegal characters I am referring to are: ~ # % & * {} / \ | : < > ? - ""

    What I'm trying to do is recurse through the drive, gather up a list of filenames, and then, through regular expressions, pick out file names from the list and try to replace the invalid characters in the actual filenames themselves. Anybody have any idea how to do this? So far I have this (please remember, I'm a complete n00b to this stuff):

    class Program
    {
        static void Main(string[] args)
        {
            string[] files = Directory.GetFiles(@"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories);
            foreach (string file in files)
            {
                Console.Write(file + "\r\n");
            }
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey(true);

            string pattern = " *[\\~#%&*{}/:<>?|\"-]+ *";
            string replacement = " ";
            Regex regEx = new Regex(pattern);

            string[] fileDrive = Directory.GetFiles(@"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories);
            StreamWriter sw = new StreamWriter(@"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing\File_Renames.txt");
            foreach (string fileNames in fileDrive)
            {
                string sanitized = regEx.Replace(fileNames, replacement);
                sw.Write(sanitized + "\r\n");
            }
            sw.Close();
        }
    }

    So what I need to figure out is how to recursively search for these invalid chars and replace them in the actual filename itself. Anybody have any ideas?
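
    The code above only writes the sanitized names to a log file; the actual rename is the missing piece. A sketch of that half in Python, since the pattern is the same in any language (in .NET the analogous calls are Path.GetFileName, Path.Combine, and File.Move). Note that only the file name portion is sanitized, never the directory part of the path, and the target folder below is hypothetical:

        import os
        import re

        # Characters SharePoint rejects, per the list above.
        ILLEGAL = re.compile(r'[~#%&*{}/\\|:<>?\-"]+')

        def sanitize_tree(root):
            # Walk bottom-up so renames never invalidate paths still to visit.
            for dirpath, _dirnames, filenames in os.walk(root, topdown=False):
                for name in filenames:
                    clean = ILLEGAL.sub(' ', name).strip()
                    if clean and clean != name:
                        src = os.path.join(dirpath, name)
                        dst = os.path.join(dirpath, clean)
                        if os.path.exists(dst):  # two names may sanitize alike
                            base, ext = os.path.splitext(clean)
                            dst = os.path.join(dirpath, base + '_renamed' + ext)
                        os.rename(src, dst)

        # sanitize_tree(r'C:\some\test\folder')  # hypothetical path

    Worth noticing: the folder in the listing itself contains illegal characters ("~Test Folder for [SharePoint] %testing"), so directory names would need the same treatment before a SharePoint import.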

    Read the article

  • Java - Save video stream from Socket to File

    - by Alex
    I use my Android application for streaming video from the phone camera to my PC server, and I need to save the stream into a file on the HDD. The file is created and the stream is successfully saved, but the resulting file cannot be played with any video player (GOM, KMP, Windows Media Player, VLC, etc.): no picture, no sound, only playback errors. I tested my Android application on the phone, and in that case the captured video was successfully stored on the phone's SD card and, after transferring it to the PC, played without errors, so my code is correct. In the end, I realized that the problem is in the video container: the data is streamed from the phone in MP4 format and stored in *.mp4 files on the PC, and in this case the file may be incorrect for playback with video players. Can anyone suggest how to correctly save streaming video to a file? Here is my code that processes and stores the stream data (without error handling, to simplify):

    // getOutputMediaFile() returns a new File object
    DataInputStream in = new DataInputStream(server.getInputStream());
    FileOutputStream videoFile = new FileOutputStream(getOutputMediaFile());
    int len;
    byte buffer[] = new byte[8192];
    while ((len = in.read(buffer)) != -1) {
        videoFile.write(buffer, 0, len);
    }
    videoFile.close();
    server.close();

    Also, I would appreciate it if someone could talk about the possible "pitfalls" in dealing with the conservation of media streams. Thank you, I hope for your help! Alex.

    Read the article

  • SQL Server, how to join a table in a "rotated" format (returning columns instead of rows)?

    - by Joshua Carmody
    Sorry for the lame title; my descriptive skills are poor today. In a nutshell, I have a query similar to the following:

    SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP
    FROM PERSON P
    JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID
    JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID

    This returns output like this:

    LAST_NAME   FIRST_NAME  DEMO_GROUP
    ----------------------------------
    Johnson     Bob         Male
    Smith       Jane        Female
    Smith       Jane        Teacher
    Beeblebrox  Zaphod      Male
    Beeblebrox  Zaphod      Alien
    Beeblebrox  Zaphod      Politician

    I would prefer the output be similar to the following:

    LAST_NAME   FIRST_NAME  Male  Female  Teacher  Alien  Politician
    ----------------------------------------------------------------
    Johnson     Bob         1     0       0        0      0
    Smith       Jane        0     1       1        0      0
    Beeblebrox  Zaphod      1     0       0        1      1

    The number of rows in the DEMOGRAPHIC table varies, so I can't say with certainty how many columns I need. The query needs to be flexible. Yes, it would be trivial to do this in code. But this query is one piece of a complicated set of stored procedures, views, and reporting services, many of which are outside my sphere of influence. I need to produce this output inside the database to avoid breaking the system. Any ideas? This is MS SQL Server 2005, by the way. Thanks.

    Read the article

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

    Person:
    id | name
    1  | John Doe
    2  | Jane Roe
    3  | John Smith

    Attribute:
    id | attr_name
    1  | Sex
    2  | Eye Color

    ValidValue:
    id | attr_id | value_name
    1  | 1       | Male
    2  | 1       | Female
    3  | 2       | Blue
    4  | 2       | Green
    5  | 2       | Brown

    PersonAttributes:
    id | person_id | attr_id | value_id
    1  | 1         | 1       | 1
    2  | 1         | 2       | 3
    3  | 2         | 1       | 2
    4  | 2         | 2       | 4
    5  | 3         | 1       | 1
    6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?
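
    What the filter has to express is relational division: keep a person only if their set of (attr_id, value_id) pairs contains every required pair. A sketch of the check in Python over rows shaped like PersonAttributes (in SQL the usual equivalent is a WHERE over the pairs plus GROUP BY person_id HAVING COUNT(*) equal to the number of required pairs):

        # (person_id, attr_id, value_id), mirroring PersonAttributes above
        rows = [(1, 1, 1), (1, 2, 3),
                (2, 1, 2), (2, 2, 4),
                (3, 1, 1), (3, 2, 4)]

        def matching_persons(rows, required_pairs):
            # required_pairs: set of (attr_id, value_id) a person must ALL have
            by_person = {}
            for person_id, attr_id, value_id in rows:
                by_person.setdefault(person_id, set()).add((attr_id, value_id))
            return [pid for pid, pairs in by_person.items()
                    if required_pairs <= pairs]  # subset test = "matches all"

        # Females (attr 1 = value 2) with green eyes (attr 2 = value 4):
        print(matching_persons(rows, {(1, 2), (2, 4)}))  # -> [2] (Jane Roe)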

    Read the article

  • MySQL Data -> PHP arrays -> JavaScript (or jQuery): How can I pass the data with JSON?

    - by Alex
    I am starting with a new site (it's my first one) and I am having big trouble! I wrote this code:

    <?php
    include("misc.inc");
    $cxn = mysqli_connect($host, $user, $password, $database)
        or die("couldn't connect to server");
    $query = "SELECT DISTINCT country FROM stamps";
    $result = mysqli_query($cxn, $query)
        or die("couldn't execute query");
    $numberOfRows = mysqli_num_rows($result);
    for ($i = 0; $i < $numberOfRows; $i++) {
        $row = mysqli_fetch_assoc($result);
        extract($row);
        $a = json_encode($row);
        $a = $a . ",";
        echo $a;
    }
    ?>

    and the output is as follows:

    {"country":"liechtenstein"},{"country":"romania"},{"country":"jugoslavia"},{"country":"polonia"},

    which should be a correct JSON output... How can I get it now in jQuery? I tried $.getJSON, but I am not able to use it properly. I don't yet want to pass the data to a DIV or something similar in HTML. Alex
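
    One thing worth checking: the echoed output above is not actually valid JSON. It is a bare run of objects with a trailing comma and no enclosing array, which is exactly the sort of thing $.getJSON's parser rejects. The usual fix is to collect all rows and encode once, producing a single JSON array; a quick illustration of the target shape in Python (in PHP this corresponds to pushing each $row into an array inside the loop and calling json_encode on the whole array once, after the loop):

        import json

        rows = [{"country": "liechtenstein"},
                {"country": "romania"},
                {"country": "jugoslavia"},
                {"country": "polonia"}]

        # Encode the whole list once: one valid JSON array, which
        # jQuery's $.getJSON can hand to the success callback as-is.
        print(json.dumps(rows))
        # [{"country": "liechtenstein"}, {"country": "romania"}, ...]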

    Read the article

  • Apache local configuration to resolve files correctly

    - by Alex E.
    Hello, I am new at this, so bear with me. I have just configured Apache and PHP to work on my local Mac OS X computer. PHP works fine, except when I try to load the files for my live sites. The live sites have separate directories and are sorted by client name, etc. I've created symlinks in the default root for the local web server documents. My issue is that Apache doesn't seem to want to load any of the relative paths that are found in the HTML pages. For example, I have src="/css/main.css", but Apache doesn't load the file; similarly for images, it just resolves as a file-not-found 404 error. I then thought it might be the symlinks, so I copied the full directory into the Apache document root, and still had the same result. I would really love to set up my local development environment to run Apache, PHP, and MySQL so I can develop locally and then publish when ready. I also tried the MAMP installation and had the same issues. Any help at all in this would be greatly appreciated. If my explanation wasn't clear, please let me know. Thanks! Alex.

    Read the article

  • Can't catch KEY_VALUE_BASIC_INFORMATION.Name in CmRegisterCallback

    - by alex
    I want to hide the name of a key value in the registry. I am writing a driver that uses CmRegisterCallback, but I can't catch the name of the key value that I need. When I DbgPrint the PKEY_VALUE_BASIC_INFORMATION->Name, I get only stray symbols ([, u, and so on). Where is my mistake? Can anybody help me? My RegistryCallback source:

    NTSTATUS RegistryCallback(PVOID CallbackContext, PVOID Argument1, PVOID Argument2)
    {
        PDEVICE_CONTEXT pContext = (PDEVICE_CONTEXT) CallbackContext;
        REG_NOTIFY_CLASS Action = (REG_NOTIFY_CLASS) Argument1;
        UNICODE_STRING regKeyNameValueToHide = {0};
        try {
            switch (Action) {
            case RegNtEnumerateValueKey:
            {
                PREG_ENUMERATE_VALUE_KEY_INFORMATION pInfo = (PREG_ENUMERATE_VALUE_KEY_INFORMATION) Argument2;
                //DbgPrint(pInfo->ValueName->Buffer);
                RtlInitUnicodeString(&regKeyNameValueToHide, L"alex-56328943333");
                if (pInfo->KeyValueInformationClass == KeyValueBasicInformation) {
                    PKEY_VALUE_BASIC_INFORMATION pKeyValueBasicInfirmation = (PKEY_VALUE_BASIC_INFORMATION) pInfo->KeyValueInformation;
                    UNICODE_STRING regKeyNameValue = {0};
                    RtlInitUnicodeString(&regKeyNameValue, pKeyValueBasicInfirmation->Name);
                    if (RtlEqualUnicodeString(&regKeyNameValue, &regKeyNameValueToHide, 1)) {
                        return STATUS_CALLBACK_BYPASS;
                    }
                }
                else if (pInfo->KeyValueInformationClass == KeyValueFullInformation) {
                    PKEY_VALUE_FULL_INFORMATION pKeyValueFullInfirmation = (PKEY_VALUE_FULL_INFORMATION) pInfo->KeyValueInformation;
                    UNICODE_STRING regKeyNameValue = {0};
                    RtlInitUnicodeString(&regKeyNameValue, pKeyValueFullInfirmation->Name);
                    if (RtlEqualUnicodeString(&regKeyNameValue, &regKeyNameValueToHide, 1)) {
                        return STATUS_CALLBACK_BYPASS;
                    }
                }
                break;
            }
            default:
            {
                return STATUS_SUCCESS;
                break;
            }
            }
        }
        except (EXCEPTION_EXECUTE_HANDLER) {
            DbgPrint("Exception in RegistryCallback!!!");
        }
        return STATUS_SUCCESS;
    }

    Read the article

  • nsxmlparser not resolving &apos;

    - by alex
    Hi! I'm using NSXMLParser to dissect an XML package, and I'm receiving &apos; inside the package text. I have the following defined for the xmlParser:

    [xmlParser setShouldResolveExternalEntities: YES];

    The following method is never called:

    - (void)parser:(NSXMLParser *)parser foundExternalEntityDeclarationWithName:(NSString *)entityName publicID:(NSString *)publicID systemID:(NSString *)systemID

    The text in the field before the &apos; is not considered by the parser. I'm searching for how to solve this; any idea??? Thanks in advance, Alex

    XML package portion attached:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/" xmlns:tns="urn:appwsdl">
    <SOAP-ENV:Body>
    <ns1:getObjects2Response xmlns:ns1="http://schemas.xmlsoap.org/soap/envelope/">
    <return xsi:type="tns:objectsResult">
    <totalRecipes xsi:type="xsd:string">1574</totalObjects>
    <Objects xsi:type="tns:Item">
    <id xsi:type="xsd:string">4311</id>
    <name xsi:type="xsd:string"> item title 1 </name>
    <procedure xsi:type="xsd:string">item procedure 11......

    Read the article

  • SOLR phrase query

    - by Alex
    I have a slight problem when searching with SOLR 4.0 and attempting a phrase query. I have a field called "idx_text_general_ci" which is a case-insensitive (all lowercased) field made up of all fields. When I try to search for a phrase (marine fitter), my SOLR refuses to search for the phrase, instead splitting it into 2 words: /select?defType=edismax&q=idx_text_general_ci:marine%20fitter&debugQuery=true

    debugQuery=true output below:

    <lst name="debug">
    <str name="rawquerystring">idx_text_general_ci:marine fitter</str>
    <str name="querystring">idx_text_general_ci:marine fitter</str>
    <str name="parsedquery">
    (+(idx_text_general_ci:marine DisjunctionMaxQuery((id:fitter))))/no_coord
    </str>
    <str name="parsedquery_toString">+(idx_text_general_ci:marine (id:fitter))</str>

    As you can see above, it splits the query into 2 parts (idx_text_general_ci:marine, then id:fitter). The problem I have is that I have an exact match for "marine fitter" that appears twice in the idx_text_general_ci field, yet it's ranked with a lesser score than a document with the word "marine" appearing 3 times. I know this would not be the case if my SOLR were to search the field with the phrase as expected. If I wrap the phrase in quotes I get zero results. Any help or a nudge in the right direction would be much appreciated. Thanks in advance, Alex

    Read the article

  • Destroying nested resources in a RESTful way

    - by Alex
    I'm looking for help destroying a nested resource in Merb. My current method seems nearly correct, but the controller raises an InternalServerError during the destruction of the nested object. Here are all the details concerning the request; don't hesitate to ask for more :) Thanks, Alex

    I'm trying to destroy a nested resource using the following route:

    router.resources :events, Orga::Events do |event|
      event.resources :locations, Orga::Locations
    end

    which gives, in the jQuery request (delete_ is an implementation of $.ajax with "DELETE"):

    $.delete_("/events/123/locations/456");

    On the Location controller side, I've got:

    def delete(id)
      @location = Location.get(id)
      raise NotFound unless @location
      if @location.destroy
        redirect url(:orga_locations)
      else
        raise InternalServerError
      end
    end

    And the log:

    merb : worker (port 4000) ~ Routed to: {"format"=>nil, "event_id"=>"123", "action"=>"destroy", "id"=>"456", "controller"=>"letsmotiv/locations"}
    merb : worker (port 4000) ~ Params: {"format"=>nil, "event_id"=>"123", "action"=>"destroy", "id"=>"456", "controller"=>"letsmotiv/locations"}
    ~ (0.000025) SELECT `id`, `class_type`, `name`, `prefix`, `type`, `capacity`, `handicap`, `export_name` FROM `entities` WHERE (`class_type` IN ('Location') AND `id` = 456) ORDER BY `id` LIMIT 1
    ~ (0.000014) SELECT `id`, `streetname`, `phone`, `lat`, `lng`, `country_region_city_id`, `location_id`, `organisation_id` FROM `country_region_city_addresses` WHERE `location_id` = 456 ORDER BY `id` LIMIT 1
    merb : worker (port 4000) ~ Merb::ControllerExceptions::InternalServerError - (Merb::ControllerExceptions::InternalServerError)

    Read the article

  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light on why my app is crashing with the following error on iPhone OS 2.2.1?

    dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
    Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
    Expected in: /System/Library/Frameworks/Foundation.framework/Foundation

    I have weak-linked CoreData.framework, set the Base SDK to 3.0, and set the Deployment Target to SDK 2.2. The app already uses other 3.0 features when available, and I did not have any problems with those. But apparently the backward-compatibility methods used for the other features do not work with Core Data. The app crashes before the app delegate's applicationDidFinishLaunching gets called. Here's the debugger log:

    [Session started at 2010-05-25 20:17:03 -0400.]
    GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009)
    Copyright 2004 Free Software Foundation, Inc.
    GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
    This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
    tty /dev/ttys001
    Loading program into debugger…
    sharedlibrary apply-load-rules all
    warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
    warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
    Program loaded.
    target remote-mobile /tmp/.XcodeGDBRemote-12038-42
    Switching to remote-macosx protocol
    mem 0x1000 0x3fffffff cache
    mem 0x40000000 0xffffffff none
    mem 0x00000000 0x0fff none
    run
    Running…
    [Switching to thread 10755]
    [Switching to thread 10755]
    Re-enabling shared library breakpoint 1
    Re-enabling shared library breakpoint 2
    Re-enabling shared library breakpoint 3
    Re-enabling shared library breakpoint 4
    Re-enabling shared library breakpoint 5
    (gdb) continue
    warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found).
    dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
    Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
    Expected in: /System/Library/Frameworks/Foundation.framework/Foundation
    (gdb)

    Read the article

  • How can I 'transpose' my data using SQL and remove duplicates at the same time?

    - by Remnant
    I have the following data structure in my database:

    LastName  FirstName  CourseName
    John      Day        Pricing
    John      Day        Marketing
    John      Day        Finance
    Lisa      Smith      Marketing
    Lisa      Smith      Finance
    etc...

    The data shows employees within a business and which courses they have shown a preference to attend. The number of courses per employee will vary (i.e. as above, John has 3 courses and Lisa 2). I need to take this data from the database and pass it to a webpage view (ASP.NET MVC). I would like the data that comes out of my database to match the view as much as possible, and I want to transform the data using SQL so that it looks like the following:

    LastName  FirstName  Course1    Course2    Course3
    John      Day        Pricing    Marketing  Finance
    Lisa      Smith      Marketing  Finance

    Any thoughts on how this may be achieved? Note: one of the reasons I am trying this approach is that the original data structure does not easily lend itself to being iterated over using the typical MVC syntax:

    <% foreach (var item in Model.courseData) { %>

    Because of the duplication of names in the original data, I would end up with lots of conditionals in my view, which I would like to avoid. I have tried transforming the data using C# in my ViewModel but have found it tough going, and feel that I could lighten the workload by leveraging SQL before I return the data. Thanks.
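
    For what it's worth, the reshaping itself is a simple group-by, whichever tier it lands in; SQL Server 2005's PIVOT operator can do it in the database, but it wants an explicit column list, so a varying course count pushes you into dynamic SQL. A sketch of the transform in Python, just to pin down the logic (illustrative only, not the eventual C# or SQL):

        from collections import OrderedDict

        rows = [("John", "Day", "Pricing"),
                ("John", "Day", "Marketing"),
                ("John", "Day", "Finance"),
                ("Lisa", "Smith", "Marketing"),
                ("Lisa", "Smith", "Finance")]

        # One record per employee, with that employee's courses gathered
        # into an ordered list: one loop in the view, no conditionals.
        people = OrderedDict()
        for last, first, course in rows:
            people.setdefault((last, first), []).append(course)

        for (last, first), courses in people.items():
            print(last, first, courses)
        # John Day ['Pricing', 'Marketing', 'Finance']
        # Lisa Smith ['Marketing', 'Finance']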

    Read the article

  • Using Excel as front end to Access database (with VBA)

    - by Alex
    I am building a small application for a friend, and they'd like to be able to use Excel as the front end (the UI will basically be userforms in Excel). They have a bunch of data in Excel that they would like to be able to query, but I do not want to use Excel as a database, as I don't think it is fit for that purpose, and I am considering using Access. [BTW, I know Access has its shortcomings, but there is zero budget available and Access is already on my friend's PC.]

    To summarise: I am considering dumping a bunch of data into Access and then using Excel as a front end to query the database and display results in a userform-style environment. Questions:

    1. How easy is it to link to Access from Excel using ADO/DAO? Is it quite limited in terms of functionality, or can I get creative?
    2. Do I pay a performance penalty (vs. using forms in Access as the UI)?
    3. Assuming that the database will always be updated using ADO/DAO commands from within Excel VBA, does that mean I can have multiple Excel users using that one single Access database and not run into any concurrency issues, etc.?
    4. Any other things I should be aware of?

    I have strong Excel VBA skills and think I can pick up Access VBA quite quickly, but I have never really done an Excel/Access link before. I could shoehorn the data into Excel and use it as a quasi-database, but that just seems more pain than it is worth (and not a robust long-term solution). Any advice appreciated. Alex

    Read the article
