Search Results

Search found 22078 results on 884 pages for 'composite primary key'.


  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
    Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I want to then run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter their credentials. My script currently looks as follows: class GetInput { public static void ScriptMain(Project project) { Console.Clear(); Console.WriteLine("==================================================================="); Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility"); Console.WriteLine("==================================================================="); Console.WriteLine(); Console.WriteLine("Please enter the acronym of the project to work on from the following list:"); Console.WriteLine(); Console.WriteLine("--------"); Console.WriteLine("- BCS"); Console.WriteLine("- COPEN"); Console.WriteLine("- FCDD"); Console.WriteLine("--------"); Console.WriteLine(); Console.Write("Selection: "); project.Properties["project.type"] = Console.ReadLine(); Console.WriteLine(); Console.Write("Please enter username: "); string username = Console.ReadLine(); project.Properties["username"] = username; string password = ReturnPassword(); project.Properties["password"] = password; Console.WriteLine(); Console.Write("Please enter database: "); string database = Console.ReadLine(); project.Properties["database"] = database; Console.WriteLine(); //Call method to verify user credentials Console.WriteLine(); Console.WriteLine("Compiling files..."); } public static string ReturnPassword() { Console.Write("Please enter password: "); string password = ""; ConsoleKeyInfo nextKey = Console.ReadKey(true); while (nextKey.Key != ConsoleKey.Enter) { if (nextKey.Key == ConsoleKey.Backspace) { if (password.Length > 0) { password = password.Substring(0, password.Length - 1); Console.Write(nextKey.KeyChar); Console.Write(" "); Console.Write(nextKey.KeyChar); } } else { password += nextKey.KeyChar; Console.Write("*"); } nextKey = Console.ReadKey(true); } return password; } } Having done a bit of research, I find that you can connect to Oracle databases using the System.Data.OracleClient namespace clicky. However, as mentioned in the link, Microsoft is discontinuing support for this so it is not a desirable solution. I have also found that Oracle provides its own classes for connecting to Oracle databases clicky. However, this only seems to support connecting to Oracle 9 or newer databases (clicky) so it is not a feasible solution as I also need to connect to Oracle 6i databases. I could achieve this by calling a bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as is contained in the following .bat script: rem -- Set Database SID -- set ORACLE_SID=%DBSID% sqlplus -s %nameofuser%/%password%@%dbsid% set cmdsep on set cmdsep '"'; --" set term on set echo off set heading off select '========================================' || CHR(10) || 'Have checked and found both Password and ' || chr(10) || 'Database Identifier are valid, continuing ...'
    || CHR(10) || '========================================' from dual; exit; This requires me to set the environment variable ORACLE_SID and then run sqlplus in silent mode (-s) followed by a series of SQL set commands (set x), the actual select statement and an exit command. Can I achieve this within a C# script without calling a bat script, or am I forced to call a bat script? Thanks in advance!

    Read the article

  • How to Bind a selected Item in a Listbox to an ItemsControl and ItemTemplate in WPF and C#

    - by Scott
    All, Lowdown: I am trying to create a Document Viewer in WPF. The layout is this: the left side is a full list box. On the right side is a Collection or an Items control. Inside the items control will be a collection of the "selected documents" in the list box. So a user can select multiple items in the list box and for each new item they select, they can add the item to the collection on the right. I want the collection to look like an image gallery that shows up in Google/Bing Image searches. Make sense? The problem I am having is I can't get the WPFPreviewer to bind correctly to the selected item in the list box under the ItemsControl. Side note: the WPFPreviewer is something Microsoft puts out that allows us to preview documents. Other previewers can be built for all types of documents, but I'm going basic here until I get this working right. I have been successful in binding to the list box WITHOUT the items control here: <Window.Resources> <DataTemplate x:Key="listBoxTemplate"> <StackPanel Margin="3" > <DockPanel > <Image Source="{Binding IconURL}" Height="30"></Image> <TextBlock Text=" " /> <TextBlock x:Name="Title" Text="{Binding Title}" FontWeight="Bold" /> <TextBlock x:Name="URL" Visibility="Collapsed" Text="{Binding Url}"/> </DockPanel> </StackPanel> </DataTemplate> </Window.Resources> <Grid Background="Cyan"> <ListBox HorizontalAlignment="Left" ItemTemplate="{StaticResource listBoxTemplate}" Width="200" AllowDrop="True" x:Name="lbDocuments" ItemsSource="{Binding Path=DocumentElements,ElementName=winDocument}" DragEnter="documentListBox_DragEnter" /> <l:WPFPreviewHandler Content="{Binding ElementName=lbDocuments, Path=SelectedItem.Url}"/> </Grid> Though, once I add in the ItemsControl, I can't get it to work anymore: <Window.Resources> <DataTemplate x:Key="listBoxTemplate"> <StackPanel Margin="3" > <DockPanel > <Image Source="{Binding IconURL}" Height="30"></Image> <TextBlock Text=" " /> <TextBlock x:Name="Title" Text="{Binding Title}" FontWeight="Bold" /> <TextBlock x:Name="URL" Visibility="Collapsed" Text="{Binding Url}"/> </DockPanel> </StackPanel> </DataTemplate> </Window.Resources> <Grid> <ListBox HorizontalAlignment="Left" ItemTemplate="{StaticResource listBoxTemplate}" Width="200" AllowDrop="True" x:Name="lbDocuments" ItemsSource="{Binding Path=DocumentElements,ElementName=winDocument}" DragEnter="documentListBox_DragEnter" /> <ItemsControl x:Name="DocumentViewer" ItemsSource="{Binding ElementName=lbDocuments, Path=SelectedItem.Url}" > <ItemsControl.ItemTemplate> <DataTemplate> <Grid Background="Cyan"> <l:WPFPreviewHandler Content="{Binding Url}"/> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </Grid> Can someone please help me out with trying to bind to the ItemsControl if I select one or even multiple items in the listbox?

    Read the article

  • Changing the admin edit window display values

    - by Henri
    I have a database table with, e.g., a weight value: CREATE TABLE product ( id SERIAL NOT NULL, product_name item_name NOT NULL, . . weight NUMERIC(7,3), -- the weight in kg . . CONSTRAINT PK_product PRIMARY KEY (id) ); This results in the model: class Product(models.Model): . weight = models.DecimalField(max_digits=7, decimal_places=3, blank=True, null=True) . I store the weight in kg, i.e. 1 kg is stored as 1, and 0.1 kg or 100 g is stored as 0.1. To make it easier for the user, I display the weight in the Admin list display in grams by specifying: def show_weight(self): if self.weight: weight_in_g = self.weight * 1000 return '%.0f' % weight_in_g So if a product weighs e.g. 0.5 kg and is stored in the database as such, the admin list display shows 500. Is there also a way to alter the number shown in the 'Change product' window? This window now shows the value extracted from the database, i.e. 0.5. This will confuse a user when I tell him with the help_text to enter the number in g, while he sees the number of kg. Before saving the product I override save as follows: def save(self): if self.weight: self.weight = self.weight / 1000 This converts the number entered in grams to kg.
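
    One way to keep the grams/kilograms conversion out of save() (which silently re-divides every time an object is saved) is a custom ModelForm for the admin. This is only a sketch against the Product model shown above; the admin wiring and helper names are assumptions, not code from the question.

        # forms.py - hedged sketch: show kilograms as grams in the change form,
        # convert back to kilograms on the way in, and leave save() alone.
        from django import forms
        from .models import Product  # assumes the Product model above

        class ProductAdminForm(forms.ModelForm):
            class Meta:
                model = Product
                fields = '__all__'

            def __init__(self, *args, **kwargs):
                super().__init__(*args, **kwargs)
                # Display the stored kilograms as grams when editing an existing product.
                if self.instance.pk and self.instance.weight is not None:
                    self.initial['weight'] = self.instance.weight * 1000

            def clean_weight(self):
                # Convert the grams the user typed back to kilograms for storage.
                weight = self.cleaned_data.get('weight')
                return weight / 1000 if weight is not None else weight

        # admin.py (sketch)
        # class ProductAdmin(admin.ModelAdmin):
        #     form = ProductAdminForm

    With the conversion in the form, the model always holds kilograms and the double-division risk of converting inside save() goes away.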

    Read the article

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup: The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs. Each page hit executes at most one statement and executes it only once. I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally. Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!
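
    For comparison only (the project above is PHP/PDO, not Python): a hedged sketch of the same trade-off, where a driver such as PyMySQL binds parameters client-side and sends a single plain query, which is essentially what PDO's emulated prepares do - parametrisation and quoting without an extra PREPARE round trip. Connection details below are invented.

        # Sketch: client-side parameter binding, one round trip per query.
        import pymysql

        conn = pymysql.connect(host="localhost", user="app", password="secret", db="site")
        try:
            with conn.cursor() as cur:
                # The driver escapes the %s values locally and sends one ordinary query,
                # so there is no separate server-side prepare/execute exchange.
                cur.execute(
                    "SELECT foo, bar FROM baz WHERE quux = %s ORDER BY bar LIMIT 1",
                    ("some-value",),
                )
                row = cur.fetchone()
        finally:
            conn.close()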

    Read the article

  • Should I learn VB.NET or C#?

    - by Ravi
    Background: I have decided to do my graduation project (yet to start) in .NET. Regarding it, I am a bit confused about which language I should learn: VB.NET or C#? What I have learnt from those who know them is that both VB.NET and C# have the same concepts; VB.NET is simpler as it is more like English statements, but C# is simple too if you already know C (which I do know). Question: So considering some factors, e.g. career point of view, newness, how challenging and beneficial it is, etc., what language should I choose? Please help me out. And clearly do justify your answer (whatever reason you have). References (extra): A little information about what project I am doing: it is a database file system. Technologies I'll be using are SQL Server, WPF, etc. I just love the concept of a database file system. So for those who want to know more about database file systems, here are the links: DBFS (this one is really good; it serves as the primary reference for me), Towards A Single Folder Filesystem, stackoverflow - What is a database file system? UPDATE 1: After some really well-explained answers (actually all are good in their place), I have finally decided to go with C# for myself. Thank you all. Still, you are requested to put in your opinion (once it is reopened, of course). UPDATE 2: Question reopened and made community wiki. Thank you all.

    Read the article

  • Spring / Hibernate / JUnit - No Hibernate Session bound to Thread

    - by Marty Pitt
    Hi I'm trying to access the current hibernate session in a test case, and getting the following error: org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:574) I've clearly missed some sort of setup, but not sure what. Any help would be greatly appreciated. This is my first crack at Hibernate / Spring etc, and the learning curve is certainly steep! Regards Marty Code follows: The offending class: public class DbUnitUtil extends BaseDALTest { @Test public void exportDtd() throws Exception { Session session = sessionFactory.getCurrentSession(); session.beginTransaction(); Connection hsqldbConnection = session.connection(); IDatabaseConnection connection = new DatabaseConnection(hsqldbConnection); // write DTD file FlatDtdDataSet.write(connection.createDataSet(), new FileOutputStream("test.dtd")); } } Base class: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"classpath:applicationContext.xml"}) public class BaseDALTest extends AbstractJUnit4SpringContextTests { public BaseDALTest() { super(); } @Resource protected SessionFactory sessionFactory; } applicationContext.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName"> <value>org.hsqldb.jdbcDriver</value> </property> <property name="url"> <value>jdbc:hsqldb:mem:sample</value> </property> <property name="username"> <value>sa</value> </property> <property name="password"> <value></value> </property> </bean> <bean id="sessionFactory" class="com.foo.spring.AutoAnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="entityPackages"> <list> <value>com.sample.model</value> </list> </property> <property name="schemaUpdate"> <value>true</value> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect </prop> <prop key="hibernate.show_sql">true</prop> </props> </property> </bean> </beans>

    Read the article

  • Entity Framework 4 and SYSUTCDATETIME ()

    - by GIbboK
    Hi, I use EF4 and C#. I have a table in my database (MS SQL 2008) with a column whose default value is SYSUTCDATETIME(). The idea is to automatically add the date and time as soon as a new record is created. I created my conceptual model using EF4, and I have created an ASP.NET page with a DetailsView control in INSERT mode. My problem: when I create a new record, EF is not able to insert the actual date and time value but instead inserts the value 0001-01-01 00:00:00.00. I suppose EF is not able to use the SYSUTCDATETIME() default defined in my database. Any idea how to solve it? Thanks. Here is my SQL script: CREATE TABLE dbo.CmsAdvertisers ( AdvertiserId int NOT NULL IDENTITY CONSTRAINT PK_CmsAdvertisers_AdvertiserId PRIMARY KEY, DateCreated dateTime2(2) NOT NULL CONSTRAINT DF_CmsAdvertisers_DateCreated DEFAULT sysutcdatetime (), ReferenceAdvertiser varchar(64) NOT NULL, NoteInternal nvarchar(256) NOT NULL CONSTRAINT DF_CmsAdvertisers_NoteInternal DEFAULT '' ); My temporary solution (please help me out on this): e.Values["DateCreated"] = DateTime.UtcNow; More info here: http://msdn.microsoft.com/en-us/library/bb387157.aspx How to use the default Entity Framework and default date values http://msdn.microsoft.com/en-us/library/dd296755.aspx

    Read the article

  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger. tablename = tweets_cache rows = 572,327 This is the query I'm currently using that is slow, over 5 seconds: SELECT * FROM tweets_cache t WHERE t.province='' AND t.mp='0' ORDER BY t.published DESC LIMIT 50; If I take out either the WHERE or the ORDER BY, then the query is super fast, 0.016 seconds. I have the following indexes on the tweets_cache table: PRIMARY, published, mp, category, province, author. So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. Doing a profile of the query shows that it's not using an index to sort the query and is using filesort, which is really slow. possible_keys = mp,province Extra = Using where; Using filesort I tried adding a new multi-column index with "province & mp". The explain shows this new index listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query: http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local runs super fast, milliseconds. So I copied all the same mysql startup variables from the server to my local to make sure there wasn't some setting that might be causing this. But even after that the local query runs super fast, but the one on the live server is over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using: default-storage-engine = MYISAM max_connections = 800 skip-locking key_buffer = 512M max_allowed_packet = 1M table_cache = 512 sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 16M myisam_sort_buffer_size = 64M thread_cache_size = 8 query_cache_size = 128M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 8 # Disable Federated by default skip-federated key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M
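
    A commonly suggested fix for this shape of query (equality filters plus ORDER BY ... DESC LIMIT) is a single composite index whose leading columns are the equality columns and whose last column is the sort column, so MySQL can read rows already in published order instead of filesorting. A hedged sketch using the MySQLdb driver; the index name and connection details are made up, and this has not been run against the table above.

        # Sketch: composite index covering WHERE (province, mp) plus ORDER BY (published).
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
        cur = conn.cursor()
        cur.execute(
            "ALTER TABLE tweets_cache "
            "ADD INDEX idx_province_mp_published (province, mp, published)"
        )
        # EXPLAIN should now pick this index and drop 'Using filesort' for the query:
        cur.execute(
            "EXPLAIN SELECT * FROM tweets_cache "
            "WHERE province = %s AND mp = %s ORDER BY published DESC LIMIT 50",
            ("", "0"),
        )
        print(cur.fetchall())
        conn.close()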

    Read the article

  • HTML form with multiple submit options

    - by phimuemue
    Hi, I'm trying to create a small web app that is used to remove items from a MySQL table. It just shows the items in an HTML table and, for each item, a button [delete]: item_1 [delete] item_2 [delete] ... item_N [delete] To achieve this, I dynamically generate the table via PHP into an HTML form. This form then obviously has N [delete] buttons. The form should use the POST method for transferring data. For the deletion I wanted to submit the ID (primary key in the MySQL table) of the corresponding item to the executing PHP script. So I introduced hidden fields (all these fields have the name 'ID') that store the ID of the corresponding item. However, when pressing an arbitrary [delete], it always seems to submit just the last ID (i.e. the value of the last ID hidden field). Is there any way to submit just the ID field of the corresponding item without using multiple forms? Or is it possible to submit data from multiple forms with just one submit button? Or should I choose a completely different way? The reason I want to do it in just one single form is that there are some "global" parameters that shall not be placed next to each item, but just once for the whole table.

    Read the article

  • Problem with relative path to image in XAML?

    - by Giri
    I am trying to reference a PNG file in my application's working directory through XAML with the following: <Image Name="contactImage"> <Image.Source> <BitmapImage UriSource="/Images/contact.png" /> </Image.Source> </Image> Now in my code-behind I try to get the height of the image with contactImage.Source.Height. This fails with System.IOException - cannot locate resource 'images/contact.png'. If I use something like PngBitmapDecoder p = new PngBitmapDecoder(new Uri("./Images/contact.png"), UriKind.Relative, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default); everything is happy. How can I reference an image in XAML with a path relative to the working directory of the app? BTW, this is being run on a remote machine (if that makes a difference). I have tried "./Images/contact.png" and ".\Images\contact.png" and several other combinations of back/forward slashes and dots. Here is the primary difference: any time the file is referenced in XAML, it shows up as pack://application:,,, blah blah blah; when I use the PngBitmapDecoder, it shows up correctly as "./Images/contact.png". How do I reference the image file in XAML and get it to show a source of "./Images/contact.png" instead of pack://application:,,, blah blah blah?

    Read the article

  • Match entities fulfilling filter (strict superset of search)

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. That is, given a set of Attributes A, I need to find all people that have a set of Attributes that are a superset of A. For example, my table structures look like this: Person: id | name 1 | John Doe 2 | Jane Roe 3 | John Smith Attribute: id | attr_name 1 | Sex 2 | Eye Color ValidValue: id | attr_id | value_name 1 | 1 | Male 2 | 1 | Female 3 | 2 | Blue 4 | 2 | Green 5 | 2 | Brown PersonAttributes id | person_id | attr_id | value_id 1 | 1 | 1 | 1 2 | 1 | 2 | 3 3 | 2 | 1 | 2 4 | 2 | 2 | 4 5 | 3 | 1 | 1 6 | 3 | 2 | 4 In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there? I've been trying to do something like the following, given that the ValidValue is unique in all cases: select distinct p from Person p join p.personAttributes a where a.value IN (:values) Then I've tried putting my set of required values in as "values", but that gives me errors no matter how I try to structure that. I also have to get a little more complicated, as follows, but at this point I'd be happy with solving the first problem cleanly. However, if it's possible, the Attribute table actually has a field for default value: id | attr_name | default_value 1 | Sex | 1 2 | Eye Color | 5 If the value you're searching on happens to be the default value, I want it to return any people that have no explicit value set for that attribute, because in the application logic, that means they inherit the default value. Again, I'm more concerned about the primary question, but if someone who can help with that also has some idea of how to do this one, I'd be extremely grateful.

    Read the article

  • UUIDs in Rails3

    - by Rob Wilkerson
    I'm trying to setup my first Rails3 project and, early on, I'm running into problems with either uuidtools, my UUIDHelper or perhaps callbacks. I'm obviously trying to use UUIDs and (I think) I've set things up as described in Ariejan de Vroom's article. I've tried using the UUID as a primary key and also as simply a supplemental field, but it seems like the UUIDHelper is never being called. I've read many mentions of callbacks and/or helpers changing in Rails3, but I can't find any specifics that would tell me how to adjust. Here's my setup as it stands at this moment (there have been a few iterations): # migration class CreateImages < ActiveRecord::Migration def self.up create_table :images do |t| t.string :uuid, :limit => 36 t.string :title t.text :description t.timestamps end end ... end # lib/uuid_helper.rb require 'rubygems' require 'uuidtools' module UUIDHelper def before_create() self.uuid = UUID.timestamp_create.to_s end end # models/image.rb class Image < ActiveRecord::Base include UUIDHelper ... end Any insight would be much appreciated. Thanks.

    Read the article

  • From ASPX to WCF

    - by Barguast
    I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C# / .NET based. I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results). At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility, and I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I can control the serialised format, I can also deserialise lists of objects as they are received, which is crucial. My problem isn't a problem as such, but this feels a little hack-ish and I can't help but wonder if there are better ways to go about it. I'm considering moving on to WCF or perhaps another technology to see if it helps. However, I need to know if it helps with my scenarios above. That is: can a WCF method return a list of objects, and can the client receive the items of this list as they arrive, as opposed to getting the entire list on completion (i.e. streaming)? Does anyone know of any examples of this? Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose. Are there any other approaches I should consider? Thanks for your time spent reading this. I hope you can help.

    Read the article

  • How should I use BIT in MS SQL 2005

    - by adopilot
    Regarding SQL performance: I have a scalar-valued function for checking a specific condition in the database. It returns a BIT value, true or false. I do not know how I should fill the @bit parameter. If I write set @bit = convert(bit,1) or set @bit = 1 or set @bit='true' the function will work either way, but I do not know which method is recommended for daily use. Another question: I have a table in my database with around 4 million records, and the daily insert is about 4K records into that table. Now I want to add a CONSTRAINT on that table with the scalar-valued function that I mentioned already, something like this: ALTER TABLE fin_stavke ADD CONSTRAINT fin_stavke_knjizenje CHECK ( dbo.fn_ado_chk_fin(id)=convert(bit,1)) where field "id" is the primary key of table fin_stavke, and dbo.fn_ado_chk_fin looks like: create FUNCTION fn_ado_chk_fin ( @stavka_id int ) RETURNS bit AS BEGIN declare @bit bit if exists (select * from fin_stavke where id=@stavka_id and doc_id is null and protocol_id is null) begin set @bit=0 end else begin set @bit=1 end return @bit; END GO Will this type of check constraint badly affect performance on my table and SQL Server at all? If there is also a better way to add this control on the table, please let me know.

    Read the article

  • Full Text Index type column is empty

    - by RemotecUk
    I am trying to create an index on a VarBinary(max) field in my SQL Server 2008 database. The steps I am taking are as follows: Table: dbo.Records. Right-click on the table and select "Full Text Index", then select "Define Index..." I choose the primary key, which is the PK of my table (field name Id, type UniqueIdentifier). I then get the screen with the options Available Columns, Language for Word Breaker and Type Column. I select my VarBinary(max) field called Chart as the Available Column by ticking the box. I select "English" as the Language for Word Breaker field. Then... I try to select the Type Column but there are no entries in here. I cannot proceed by clicking "Next" until this column is populated. Why are there no entries in this column for selection and what should be in there? Note 1: The VarBinary(max) field is linked to a file group, if that makes any difference. Note 2: I also noticed that in the table designer I cannot set the full text option on that same field to "Yes" - it's permanently stuck on "No". Thanks.

    Read the article

  • Optimizing MySql query to avoid using "Using filesort"

    - by usef_ksa
    I need your help to optimize the query to avoid using "Using filesort". The job of the query is to select all the articles that belong to a specific tag. The query is: "select title from tag,article where tag='Riyad' AND tag.article_id=article.id order by tag.article_id". The table structures are the following: Tag table CREATE TABLE `tag` ( `tag` VARCHAR( 30 ) NOT NULL , `article_id` INT NOT NULL , INDEX ( `tag` ) ) ENGINE = MYISAM ; Article table CREATE TABLE `article` ( `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY , `title` VARCHAR( 60 ) NOT NULL ) ENGINE = MYISAM Sample data INSERT INTO `article` VALUES (1, 'About Riyad'); INSERT INTO `article` VALUES (2, 'About Newyork'); INSERT INTO `article` VALUES (3, 'About Paris'); INSERT INTO `article` VALUES (4, 'About London'); INSERT INTO `tag` VALUES ('Riyad', 1); INSERT INTO `tag` VALUES ('Saudia', 1); INSERT INTO `tag` VALUES ('Newyork', 2); INSERT INTO `tag` VALUES ('USA', 2); INSERT INTO `tag` VALUES ('Paris', 3); INSERT INTO `tag` VALUES ('France', 3);
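
    The usual suggestion for this query is a composite (tag, article_id) index, so the equality filter and the ORDER BY are satisfied by one index and the filesort disappears. A hedged sketch (MySQLdb driver, connection details invented, not run against this schema):

        # Sketch: composite index so WHERE tag = ? and ORDER BY article_id share one index.
        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
        cur = conn.cursor()
        cur.execute("ALTER TABLE tag ADD INDEX idx_tag_article (tag, article_id)")
        cur.execute(
            "EXPLAIN SELECT a.title FROM tag t JOIN article a ON t.article_id = a.id "
            "WHERE t.tag = %s ORDER BY t.article_id",
            ("Riyad",),
        )
        print(cur.fetchall())  # expect no 'Using filesort' in the Extra column
        conn.close()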

    Read the article

  • I deleted a full set of columns in a save, but have the original larger sheet. Can I get that back?

    - by Ben Henley
    I have an original sheet that has over 39000 lines in it. I knocked it down to 1800 lines that I want to import into my database. However, dumb#$$ that I am, I selected only the visible cells and killed like 10 columns that I need. Is there a way to compare to the original sheet using a specific column (i.e. SKU) and pull the data from the original to put back in the missing columns, or do I have to just re-edit the whole thing down again? Please help, as this takes a good day or two to minimize. Any and all help is much appreciated. Below is the column list on the edited sheet vs. the original sheet.
    Edited sheet: SKU DESCRIPTION VENDOR PART # RETAIL UNIT CONVERSION RETAIL U/M RETAIL DEPARTMENT VENDOR NAME SELL PACK QUANTITY BREAK SELL PACK FLAG BLISH VENDOR # FINE LINE CLASS ITEM ACTION FLAG PRIMARY UPC STOCK U/M WEIGHT LENGTH WIDTH HEIGHT SHIP-VIA EXCLUSION HAZARDOUS CODE PRICE SUGGESTED RETAIL
    Original sheet: SKU DESCRIPTION RETAIL UNIT CONVERSION RETAIL U/M RETAIL DEPARTMENT VENDOR NAME SELL PACK QUANTITY BREAK SELL PACK FLAG FINE LINE CLASS HAZARDOUS CODE PRICE SUGGESTED RETAIL RETAIL SENSITIVITY CODE 2ND UPC CODE 3RD UPC CODE 4TH UPC CODE HEADLINE BULLET #1 BULLET #2 BULLET #3 BULLET #4 BULLET #5 BULLET #6 BULLET #7 BULLET #8 BULLET #9 SIZE COLOR CASE QUANTITY PRODUCT LINE
    Thanks, Ben
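
    One way to do the restore without re-editing, sketched with pandas (an assumption - the poster never mentions Python): join the trimmed sheet back to the original on SKU and pull only the columns that were lost. File names and the placeholder column names are guesses to be replaced.

        # Sketch: recover the deleted columns for the 1800 surviving rows by joining on SKU.
        import pandas as pd

        original = pd.read_excel("original_39000_rows.xlsx")   # still has every column
        edited = pd.read_excel("edited_1800_rows.xlsx")        # rows kept, columns missing

        missing_cols = ["HEADLINE", "SIZE", "COLOR"]  # replace with the columns actually lost
        restored = edited.merge(original[["SKU"] + missing_cols], on="SKU", how="left")

        restored.to_excel("restored_1800_rows.xlsx", index=False)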

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do: I have a list of tests, with the following attributes: uri: a URI to test (could be HTTP/HTTPS/SSH/local); depends: an associative array of tests/values that this test depends on; join: a list of DB joins to be added when selecting items to process in this test; depends_db: additional conditions to add to the DB request when selecting items to process in this test. The program builds a dependency tree, beginning with the tests that have no dependencies; for each test: a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db); the list of items is sent to the URI (using POST or stdin); the result is retrieved as a YAML file listing the state and comments for the test for each tested item; the results are stored in the DB; the test returns, allowing depending tests to be performed. The program generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be: - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backup went well; - DNS: hosted on the local machine, called via stdin, checks if the machines' fqdn have a valid DNS entry. Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
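
    Since Python is one of the two languages being considered, here is a hedged sketch of just the scheduling core: tests are submitted to a thread pool as soon as every test they depend on has finished, which is the parallel version of walking the dependency tree. The test bodies, DB selection and YAML handling are stubbed out, the dict layout is an invention, and the dependency graph is assumed to be acyclic.

        # Sketch of dependency-driven, parallel test scheduling.
        from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

        tests = {
            "backup": {"depends": [], "run": lambda deps: {"state": "ok"}},
            "dns":    {"depends": [], "run": lambda deps: {"state": "ok"}},
            "report": {"depends": ["backup", "dns"], "run": lambda deps: {"state": "ok"}},
        }

        def run_all(tests, workers=4):
            results, pending, futures = {}, dict(tests), {}
            with ThreadPoolExecutor(max_workers=workers) as pool:
                while pending or futures:
                    # Submit every test whose dependencies have all produced a result.
                    ready = [name for name, t in pending.items()
                             if all(dep in results for dep in t["depends"])]
                    for name in ready:
                        t = pending.pop(name)
                        dep_results = {d: results[d] for d in t["depends"]}
                        futures[pool.submit(t["run"], dep_results)] = name
                    done, _ = wait(futures, return_when=FIRST_COMPLETED)
                    for fut in done:
                        results[futures.pop(fut)] = fut.result()
            return results

        print(run_all(tests))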

    Read the article

  • I am using relational division with EAV, but I need to find results in EAV that have some of the cat

    - by NewToDB
    I have two tables: CREATE TABLE EAV ( subscriber_id INT(1) NOT NULL DEFAULT '0', attribute_id CHAR(62) NOT NULL DEFAULT '', attribute_value CHAR(62) NOT NULL DEFAULT '', PRIMARY KEY (subscriber_id,attribute_id) ) INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'color','red') INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'size','xl') INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (1,'garment','shirt') INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'color','red') INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'size','xl') INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (2,'garment','pants') INSERT INTO EAV (subscriber_id, attribute_id, attribute_value) VALUES (3,'garment','pants') CREATE TABLE CRITERIA ( attribute_id CHAR(62) NOT NULL DEFAULT '', attribute_value CHAR(62) NOT NULL DEFAULT '' ) INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('color', 'red') INSERT INTO CRITERIA (attribute_id, attribute_value) VALUES ('size', 'xl') To find all subscribers in the EAV that match my criteria, I use relational division: SELECT DISTINCT(subscriber_id) FROM EAV WHERE subscriber_id IN (SELECT E.subscriber_id FROM EAV AS E JOIN CRITERIA AS CR ON E.attribute_id = CR.attribute_id AND E.attribute_value = CR.attribute_value GROUP BY E.subscriber_id HAVING COUNT(*) = (SELECT COUNT(*) FROM CRITERIA)) This gives me a unique list of subscribers who have all the criteria. So that means I get back subscribers 1 and 2, since they are looking for the color red and size xl, and that's exactly my criteria. But what if I want to extend this so that I also get subscriber 3, since this subscriber didn't specifically say what color or size they want (i.e. there is no entry for attribute 'color' or 'size' in the EAV table for subscriber 3). Given my current design, is there a way I can extend my query to include subscribers that have zero or more of the attributes defined, and if they do have the attribute defined, then it must match the criteria? Or is there a better way to design the table to aid in querying?
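
    Before wrestling that rule into SQL, it can help to state it in plain code: a subscriber matches when, for every criterion, either they have that exact attribute/value, or they have no entry for the attribute and the criterion happens to equal the attribute's default. A hedged Python sketch over data shaped like the rows above; the defaults mapping stands in for the default_value column mentioned at the end and is an assumption.

        # Sketch of the matching rule only; in practice these rows come from EAV/CRITERIA.
        eav = {
            1: {"color": "red", "size": "xl", "garment": "shirt"},
            2: {"color": "red", "size": "xl", "garment": "pants"},
            3: {"garment": "pants"},                      # no color/size set
        }
        criteria = {"color": "red", "size": "xl"}
        defaults = {"color": "red", "size": "xl", "garment": "pants"}  # assumed defaults

        def matches(attrs, criteria, defaults):
            for attr, wanted in criteria.items():
                if attr in attrs:
                    if attrs[attr] != wanted:
                        return False          # explicit value disagrees with the criterion
                elif defaults.get(attr) != wanted:
                    return False              # unset, and the criterion is not the default
            return True

        print([s for s, attrs in eav.items() if matches(attrs, criteria, defaults)])  # [1, 2, 3]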

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (apologies ahead of time, I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about) I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have actionscript classes that extend each of the graphical assets in the SWC. For example: external.swc: (some symbol defined in the Library and exported for actionscript in frame 1) class: com.foo.WidgetGraphic base: flash.display.Sprite main.fla: Widget.as: package com.foo { public class Widget extends WidgetGraphic { ... } } This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish? Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article

  • django url user id versus userprofile id problem

    - by dana
    Hello there, I have a mini community where each user can search and find another user's profile. UserProfile is a model class, indexed differently compared to the User model class (user id is not equal to userprofile id). But I cannot see a user profile by typing the corresponding id in the URL; I only see the profile of the currently logged-in user. Why is that? I'd also want to have in my URL the username (also a primary key of the user table) and NOT the id (a number). The guilty part of the code is below: what can I replace that request.user with so that it will actually display the user I searched for, and not the currently logged-in one? def profile_view(request, id): u = UserProfile.objects.get(pk=id) cv = UserProfile.objects.filter(created_by = request.user) blog = New.objects.filter(created_by = request.user) return render_to_response('profile/publicProfile.html', { 'u':u, 'cv':cv, 'blog':blog, }, context_instance=RequestContext(request)) In urls (of the accounts app): url(r'^profile_view/(?P<id>\d+)/$', profile_view, name='profile_view'), and in the template: <h3>Recent Entries:</h3> {% load pagination_tags %} {% autopaginate list 10 %} {% paginate %} {% for object in list %} <li>{{ object.post }} <br /> Voted: {{ vote.count }} times.<br /> {% for reply in object.reply_set.all %} {{ reply.reply }} <br /> {% endfor %} <a href=''> {{ object.created_by }}</a> <br /> {{object.date}} <br /> <a href = "/vote/save_vote/{{object.id}}/">Vote this</a> <a href="/replies/save_reply/{{object.id}}/">Comment</a> </li> {% endfor %} Thanks in advance!
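
    A hedged rewrite of the view: the listings should be filtered by the owner of the profile that was looked up, not by request.user. The u.user attribute below assumes UserProfile has a ForeignKey/OneToOneField named user pointing at auth's User; adjust to whatever the model actually calls it.

        # Sketch: filter by the user who owns the requested profile, not the viewer.
        from django.shortcuts import render_to_response
        from django.template import RequestContext
        # from accounts.models import UserProfile, New   # adjust to the real import path

        def profile_view(request, id):
            u = UserProfile.objects.get(pk=id)
            profile_owner = u.user                       # assumption: field name may differ
            cv = UserProfile.objects.filter(created_by=profile_owner)
            blog = New.objects.filter(created_by=profile_owner)
            return render_to_response('profile/publicProfile.html',
                                      {'u': u, 'cv': cv, 'blog': blog},
                                      context_instance=RequestContext(request))

    To put the username in the URL instead of a number, the pattern could capture (?P<username>[\w.@+-]+) and the view could look the profile up with UserProfile.objects.get(user__username=username), again assuming that user field.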

    Read the article

  • handling filename* parameters with spaces via RFC 5987 results in '+' in filenames

    - by Peter Friend
    I have some legacy code I am dealing with (so no I can't just use a URL with an encoded filename component) that allows a user to download a file from our website. Since our filenames are often in many different languages they are all stored as UTF-8. I wrote some code to handle the RFC5987 conversion to a proper filename* parameter. This works great until I have a filename with non-ascii characters and spaces. Per RFC, the space character is not part of attr_char so it gets encoded as %20. I have new versions of Chrome as well as Firefox and they are all converting to %20 to + on download. I have tried not encoding the space and putting the encoded filename in quotes and get the same result. I have sniffed the response coming from the server to verify that the servlet container wasn't mucking with my headers and they look correct to me. The RFC even has examples that contain %20. Am I missing something, or do all of these browsers have a bug related to this? Many thanks in advance. The code I use to encode the filename is below. Peter public static boolean bcsrch(final char[] chars, final char c) { final int len = chars.length; int base = 0; int last = len - 1; /* Last element in table */ int p; while (last >= base) { p = base + ((last - base) >> 1); if (c == chars[p]) return true; /* Key found */ else if (c < chars[p]) last = p - 1; else base = p + 1; } return false; /* Key not found */ } public static String rfc5987_encode(final String s) { final int len = s.length(); final StringBuilder sb = new StringBuilder(len << 1); final char[] digits = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'}; final char[] attr_char = {'!','#','$','&','\'','+','-','.','0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z','^','_','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z','|', '~'}; for (int i = 0; i < len; ++i) { final char c = s.charAt(i); if (bcsrch(attr_char, c)) sb.append(c); else { final char[] encoded = {'%', 0, 0}; encoded[1] = digits[0x0f & (c >>> 4)]; encoded[2] = digits[c & 0x0f]; sb.append(encoded); } } return sb.toString(); } Update Here is a screen shot of the download dialog I get for a file with Chinese characters with spaces as mentioned in my comment.
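
    For what it's worth, a hedged Python sketch of the usual belt-and-braces header: send a plain-ASCII filename as a fallback plus the RFC 5987 filename* value. urllib.parse.quote leaves the unreserved characters alone and percent-encodes everything else (spaces as %20, non-ASCII as UTF-8 octets); whether a particular browser then shows the space correctly is still up to the browser, which is the behaviour being asked about.

        # Sketch: Content-Disposition with an ASCII fallback and an RFC 5987 filename*.
        from urllib.parse import quote

        def content_disposition(filename: str) -> str:
            fallback = filename.encode("ascii", "replace").decode("ascii").replace('"', "'")
            encoded = quote(filename, safe="")   # space -> %20, non-ASCII -> UTF-8 percent-encoded
            return 'attachment; filename="{}"; filename*=UTF-8\'\'{}'.format(fallback, encoded)

        print(content_disposition("minutes 2010 report.pdf"))
        # attachment; filename="minutes 2010 report.pdf"; filename*=UTF-8''minutes%202010%20report.pdf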

    Read the article

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users in a couple of months (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort. In order to explain, I would like to use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the Page Controllers. But once the traffic increases, for example, authenticating users (or similar services related to user entities) has to be moved out to a different internal server to spread the load. But at the same time, using RPC calls over the network when the user count is 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with and moving on to a TcpListener later on. If we have a proper design that encapsulates the above approach, it would be easy for us to scale out services onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: The database scaling is definitely the second-phase optimization, so we already have an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
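
    A hedged sketch of the seam being described, with all names invented: page controllers depend on a small service interface, and whether the work happens in-process or over a remote call is decided by configuration, so moving authentication to another box later does not touch the callers. (Python here purely for brevity; the same shape applies to the C#/.NET classes mentioned above.)

        # Sketch: callers depend on UserService; the transport behind it is swappable.
        from abc import ABC, abstractmethod

        class UserService(ABC):
            @abstractmethod
            def authenticate(self, username: str, password: str) -> bool: ...

        class InProcessUserService(UserService):
            def authenticate(self, username, password):
                # Fast path while the user count is small: a direct local call.
                return check_credentials_locally(username, password)

        class RemoteUserService(UserService):
            def __init__(self, endpoint):
                self.endpoint = endpoint
            def authenticate(self, username, password):
                # Same contract, but the work is done by a dedicated auth server.
                return rpc_call(self.endpoint, "authenticate", username, password)

        def check_credentials_locally(username, password):
            return (username, password) == ("demo", "demo")      # stand-in for the data layer

        def rpc_call(endpoint, method, *args):
            raise NotImplementedError("named pipe / TCP transport goes here")  # stand-in

        def make_user_service(config):
            # Switching transports is a configuration change, not a caller change.
            if config.get("auth_transport") == "remote":
                return RemoteUserService(config["auth_endpoint"])
            return InProcessUserService()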

    Read the article

  • How should I manage my many-to-many relationships?

    - by wes
    Hello all, I have a database containing a couple tables: files and users. This relationship is many-to-many, so I also have a table called users_files_ref which holds foreign keys to both of the above tables. Here's the schema of each table: files - file_id, file_name users - user_id, user_name users_files_ref - user_file_ref_id, user_id, file_id I'm using Codeigniter to build a file host application, and I'm right in the middle of adding the functionality that enables users to upload files. This is where I'm running into my problem. Once I add a file to the files table, I will need that new file's id to update the users_files_ref table. Right now I'm adding the record to the files table, and then I imagined I'd run a query to grab the last file added, so that I can get the ID, and then use that ID to insert the new users_files_ref record. I know this will work on a small scale, but I imagine there is a better way of managing these records, especially in a heavy-traffic scenario. I am new to relational database stuff but have been around PHP for a while, so please bear with me here :-) I have primary and foreign keys set up correctly for the files, users, and users_files_ref tables, I'm just wondering how to manage the adding of file records for this scenario? Thanks for any help provided, it's much appreciated. -Wes
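
    A hedged sketch of the usual pattern: do both inserts on the same connection inside one transaction and take the new file id straight from the insert via cursor.lastrowid (which maps to MySQL's LAST_INSERT_ID() and is per-connection, so concurrent uploads don't collide), rather than re-querying for the most recent row. Connection details and the user id are placeholders.

        # Sketch: insert the file, reuse its generated id for the join row, commit together.
        import pymysql

        conn = pymysql.connect(host="localhost", user="app", password="secret", db="filehost")
        try:
            with conn.cursor() as cur:
                cur.execute("INSERT INTO files (file_name) VALUES (%s)", ("report.pdf",))
                file_id = cur.lastrowid               # LAST_INSERT_ID() for this connection
                cur.execute(
                    "INSERT INTO users_files_ref (user_id, file_id) VALUES (%s, %s)",
                    (42, file_id),
                )
            conn.commit()                             # both rows or neither
        except Exception:
            conn.rollback()
            raise
        finally:
            conn.close()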

    Read the article

  • git workflow incorporating many, but not all commits from many forks

    - by becomingGuru
    I have a git repo. It has been forked several times and many independent commits have been made on top of it. Everything is normal, like what happens in many GitHub-hosted projects. Now, what exact workflow should I follow if I want to see all those commits individually and apply the ones I like? The workflow I followed, which is not optimal, is to create a branch named after the GitHub username, merge the changes into my master, and manually undo any changes in the commits I don't need (there are not many, so it worked). What I want is the ability to see all commits from different forks individually and cherry-pick and apply them on top of my master. What is the workflow to follow for that? And what GUI (gitk?) enables me to see all the different individual commits? I realize that merge should be a primary part of the workflow and not cherry-pick, as cherry-picking creates a different commit (from git's point of view). Even rebasing others' changes on top of mine might not preserve the history on the graph to indicate that the rebased commits are theirs. So then, how do I ignore just a few commits out of a lot of them? I think GitHub should have an "apply this commit on top of my master" thing in their graph after each commit node, so I can just pull it, after doing all that.

    Read the article
