Search Results

Search found 4640 results on 186 pages for 'unique'.


  • Django, want to upload either image (ImageField) or file (FileField)

    - by Serg
    I have a form in my HTML page that prompts the user to upload a file or an image to the server. I want to be able to upload either a file or an image: if the user chooses a file, image should be null, and vice versa. Right now I can only upload both of them, without error. But if I choose to upload only one of them (let's say I choose the image) I will get an error:

        Key 'attachment' not found in <MultiValueDict: {u'image': [<InMemoryUploadedFile: police.jpg (image/jpeg)>]}>

    models.py:

        # Description of the file
        class FileDescription(models.Model):
            TYPE_CHOICES = (
                ('homework', 'Homework'),
                ('class', 'Class Papers'),
                ('random', 'Random Papers'),
            )
            subject = models.ForeignKey('Subjects', null=True, blank=True)
            subject_name = models.CharField(max_length=100, unique=False)
            category = models.CharField(max_length=100, unique=False, blank=True, null=True)
            file_type = models.CharField(max_length=100, choices=TYPE_CHOICES, unique=False)
            file_uploaded_by = models.CharField(max_length=100, unique=False)
            file_name = models.CharField(max_length=100, unique=False)
            file_description = models.TextField(unique=False, blank=True, null=True)
            file_creation_time = models.DateTimeField(editable=False)
            file_modified_time = models.DateTimeField()
            attachment = models.FileField(upload_to='files', blank=True, null=True, max_length=255)
            image = models.ImageField(upload_to='files', blank=True, null=True, max_length=255)

            def __unicode__(self):
                return u'%s' % (self.file_name)

            def get_fields(self):
                return [(field, field.value_to_string(self)) for field in FileDescription._meta.fields]

            def filename(self):
                return os.path.basename(self.image.name)

            def category_update(self):
                category = self.file_name
                return category

            def save(self, *args, **kwargs):
                if self.category is None:
                    self.category = FileDescription.category_update(self)
                for field in self._meta.fields:
                    if field.name == 'image' or field.name == 'attachment':
                        field.upload_to = 'files/%s/%s/' % (self.file_uploaded_by, self.file_type)
                if not self.id:
                    self.file_creation_time = datetime.now()
                self.file_modified_time = datetime.now()
                super(FileDescription, self).save(*args, **kwargs)

    forms.py:

        class ContentForm(forms.ModelForm):
            file_name = forms.CharField(max_length=255, widget=forms.TextInput(attrs={'size': 20}))
            file_description = forms.CharField(widget=forms.Textarea(attrs={'rows': 4, 'cols': 25}))

            class Meta:
                model = FileDescription
                exclude = ('subject', 'subject_name', 'file_uploaded_by',
                           'file_creation_time', 'file_modified_time', 'vote')

            def clean_file_name(self):
                name = self.cleaned_data['file_name']
                # check the length of the file name
                if len(name) < 2:
                    raise forms.ValidationError('File name is too short')
                # check if a file with the same name already exists
                if FileDescription.objects.filter(file_name=name).exists():
                    raise forms.ValidationError('File with this name already exists')
                else:
                    return name

    views.py:

        if request.method == "POST":
            if "upload-b" in request.POST:
                form = ContentForm(request.POST, request.FILES, instance=subject_id)
                if form.is_valid():
                    # need to add some clean functions
                    # handle_uploaded_file(request.FILES['attachment'],
                    #                      request.user.username,
                    #                      request.POST['file_type'])
                    form.save()
                    up_f = FileDescription.objects.get_or_create(
                        subject=subject_id,
                        subject_name=subject_name,
                        category=request.POST['category'],
                        file_type=request.POST['file_type'],
                        file_uploaded_by=username,
                        file_name=form.cleaned_data['file_name'],
                        file_description=request.POST['file_description'],
                        image=request.FILES['image'],
                        attachment=request.FILES['attachment'],
                    )
                    return HttpResponseRedirect(".")
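    The error occurs because request.FILES['attachment'] raises a KeyError when no file was submitted under that key. A hedged sketch of a fix (names taken from the code above, not tested against this exact app): read optional uploads with .get(), which returns None for a missing key, and enforce the either/or rule in the form's clean() method:

        # forms.py - enforce "either image or attachment, not both" (sketch)
        def clean(self):
            cleaned_data = super(ContentForm, self).clean()
            image = cleaned_data.get('image')
            attachment = cleaned_data.get('attachment')
            if image and attachment:
                raise forms.ValidationError('Upload either an image or a file, not both.')
            if not image and not attachment:
                raise forms.ValidationError('Upload an image or a file.')
            return cleaned_data

        # views.py - .get() yields None for a missing upload instead of raising
        image = request.FILES.get('image')
        attachment = request.FILES.get('attachment')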


  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ):

        SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind,
               mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt,
               mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code,
               mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level,
               mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
               mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name,
               mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired,
               mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type,
               mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name,
               mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
               mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id,
               mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt,
               mbr_user_idn, mbr_wgt, mbr_work_status_idn
        FROM (SELECT /*+ FIRST_ROWS(1) */
                     mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind,
                     mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt,
                     mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code,
                     mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level,
                     mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
                     mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name,
                     mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired,
                     mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type,
                     mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name,
                     mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
                     mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id,
                     mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt,
                     mbr_user_idn, mbr_wgt, mbr_work_status_idn,
                     ROWNUM AS ora_rn
              FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt,
                           mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind,
                           mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member,
                           mbr.employment_start_dt AS mbr_employment_start_dt,
                           mbr.employment_term_dt AS mbr_employment_term_dt,
                           mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn,
                           mbr.general_health_status_code AS mbr_general_health_status_code,
                           mbr.hand_dominant_code AS mbr_hand_dominant_code,
                           mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches,
                           mbr.highest_edu_level AS mbr_highest_edu_level,
                           mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id,
                           mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin,
                           mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip,
                           mbr.lmbr_first_name AS mbr_lmbr_first_name,
                           mbr.lmbr_last_name AS mbr_lmbr_last_name,
                           mbr.marital_status_cd AS mbr_marital_status_cd,
                           mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt,
                           mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name,
                           mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn,
                           mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly,
                           mbr.mbr_last_name AS mbr_mbr_last_name,
                           mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name,
                           mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id,
                           mbr.preferred_am_pm AS mbr_preferred_am_pm,
                           mbr.preferred_time AS mbr_preferred_time,
                           mbr.prv_innetwork AS mbr_prv_innetwork,
                           mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS mbr_rep_name,
                           mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins,
                           mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone,
                           mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt,
                           mbr.work_status_idn AS mbr_work_status_idn
                    FROM mbr
                    JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn
                    WHERE mbr_identfn.mbr_idn = mbr.mbr_idn
                      AND mbr_identfn.identfd_type = :identfd_type_1
                      AND mbr_identfn.identfd_number = :identfd_number_1
                      AND mbr_identfn.entity_active = :entity_active_1)
              WHERE ROWNUM <= :ROWNUM_1)
        WHERE ora_rn > :ora_rn_1

        call     count      cpu    elapsed       disk      query    current       rows
        -------  ------  -------  ---------  ---------  ---------  ---------  ---------
        Parse      9936     0.46       0.49          0          0          0          0
        Execute    9936     0.60       0.59          0          0          0          0
        Fetch      9936   329.87     404.00          0  136966922          0          0
        -------  ------  -------  ---------  ---------  ---------  ---------  ---------
        total     29808   330.94     405.09          0  136966922          0          0

        Misses in library cache during parse: 0
        Optimizer mode: FIRST_ROWS
        Parsing user id: 36 (JIVA_DEV)

        Rows  Row Source Operation
        ----  ---------------------------------------------------
           0  VIEW (cr=102 pr=0 pw=0 time=2180 us)
           0    COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us)
           0      NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us)
           0        INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us) (object id 341053)
           0        TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us)
           0          INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us) (object id 334044)

        Rows  Execution Plan
        ----  ---------------------------------------------------
           0  SELECT STATEMENT MODE: HINT: FIRST_ROWS
           0    VIEW
           0      COUNT (STOPKEY)
           0        NESTED LOOPS
           0          INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE))
           0          TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE)
           0            INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE))

    Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first column of this index is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
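    If the optimizer is mis-costing the skip scan, one hedged experiment (NO_INDEX_SS and INDEX are standard Oracle hints; the table and index names are taken from the plan above) is to forbid the skip scan and compare the plans:

        -- Sketch: disallow a skip scan of this index ...
        SELECT /*+ NO_INDEX_SS(mbr_identfn IDX_MBR_IDENTFN) */ ...

        -- ... or demand an ordinary access path on it, which for a probe on
        -- the leading columns would normally be a range or unique scan:
        SELECT /*+ INDEX(mbr_identfn IDX_MBR_IDENTFN) */ ...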


  • how to remove duplicates but keep the same order?

    - by Ben Fossen
    I have a cell array in Matlab:

        y = { 'd' 'f' 'a' 'g' 'g' 'a' 'w' 'h' }

    I use unique(y) to get rid of the duplicates, but it rearranges the strings in alphabetical order:

        >> unique(y)
        ans =
            'a'    'd'    'f'    'g'    'h'    'w'

    I want to remove the duplicates but keep the same order. I know I could write a function to do this, but was wondering if there was a simpler way, using unique, to remove the duplicates while keeping the original order. I want it to return this:

        >> unique(y)
        ans =
            'd'    'f'    'a'    'g'    'w'    'h'
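    A hedged sketch of two approaches: newer MATLAB releases accept a 'stable' flag on unique that preserves first-occurrence order, and older releases can rebuild the same order from unique's index output:

        % R2012a and later: 'stable' keeps values in order of first occurrence
        u = unique(y, 'stable');        % {'d' 'f' 'a' 'g' 'w' 'h'}

        % Older releases: take the index of each value's first occurrence,
        % then re-sort those indices to restore the original order
        [~, idx] = unique(y, 'first');
        u = y(sort(idx));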


  • Remove duplicates and update IDs linked to nonduplicates

    - by colinr23
    I have two tables, TableA and TableB, linked through a locationID. TableA has descriptive survey info, with each record from a different time (i.e. unique), while TableB has purely locational information. However, there are lots of duplicates in TableB, yet each has a unique locationID, which has an entry in TableA. I've found plenty of posts about removing duplicates from TableB, but how can I update the locationIDs in TableA so they are linked to the unique locations in TableB once the duplicates are removed? Help much appreciated!
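    A hedged sketch in generic SQL (the coordinate columns, here called X and Y, are assumptions, since only locationID is named in the question): first repoint TableA at one survivor per duplicate group, then delete the rest. Exact syntax varies by database; MySQL, for example, will not let a DELETE subquery read the table being deleted from without a temporary table.

        -- 1. Point TableA at the lowest locationID of each duplicate group
        UPDATE TableA a
        SET    a.locationID = (SELECT MIN(b2.locationID)
                               FROM   TableB b
                               JOIN   TableB b2 ON b2.X = b.X AND b2.Y = b.Y
                               WHERE  b.locationID = a.locationID);

        -- 2. Delete every duplicate that is not the survivor of its group
        DELETE FROM TableB b
        WHERE  b.locationID > (SELECT MIN(b2.locationID)
                               FROM   TableB b2
                               WHERE  b2.X = b.X AND b2.Y = b.Y);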


  • Does the order of columns matter in a group by clause?

    - by Jeff Meatball Yang
    If I have two columns, one with very high cardinality and one with very low cardinality (unique # of values), does it matter in which order I group by? Here's an example:

        select dimensionName, dimensionCategory, sum(someFact)
        from SomeFact f
        join SomeDim d on f.dimensionKey = d.dimensionKey
        group by
            d.dimensionName,     -- large number of unique values
            d.dimensionCategory  -- small number of unique values

    Are there situations where it matters?


  • Best way to model map values in Grails?

    - by Mulone
    Hi guys, I have to implement map values in my Grails app. I have a class that can contain 0..N OsmTags, and the key is unique. In Java I would model this with a Map in each object, but I don't know how to map classes in Grails. So I defined this class:

        class OsmTag {
            /** OSM tag name, e.g. natural */
            String key
            /** OSM tag value, e.g. park */
            String value

            static constraints = {
                key   blank: false, size: 2..80,  matches: /[\S]+/, unique: false
                value blank: false, size: 1..250, matches: /[\S]+/, unique: false
            }
        }

    That works ok, but it's actually quite ugly because the tag key is not unique. Is there a better way to model this issue? Cheers
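    One hedged option, assuming each tag belongs to some owning domain class (called Way here purely for illustration): GORM's unique constraint can name another property, which makes key unique per owner rather than globally:

        class OsmTag {
            String key
            String value

            // 'Way' is a hypothetical owning class, not from the question
            static belongsTo = [way: Way]

            static constraints = {
                // unique: 'way' means key must be unique within one Way,
                // so different Ways can still reuse the same tag name
                key   blank: false, size: 2..80,  matches: /[\S]+/, unique: 'way'
                value blank: false, size: 1..250, matches: /[\S]+/
            }
        }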


  • Oracle : Identifying duplicates in a table without index

    - by user288031
    When I try to create a unique index on a large table, I get a unique constraint error. The unique index in this case is a composite key of 4 columns. Is there an efficient way to identify the duplicates other than:

        select col1, col2, col3, col4, count(*)
        from Table1
        group by col1, col2, col3, col4
        having count(*) > 1

    The explain plan above shows a full table scan with extremely high cost, and I just want to find out if there is another way. Thanks!
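    One hedged alternative that avoids the hand-written aggregate: Oracle's EXCEPTIONS INTO clause records the ROWID of every row that blocks a constraint from being enabled (the constraint name below is an assumption; utlexcpt.sql ships with Oracle and creates the default EXCEPTIONS table):

        -- Create the default EXCEPTIONS table
        @?/rdbms/admin/utlexcpt.sql

        -- This ALTER fails because of the duplicates, but it logs each
        -- offending ROWID into EXCEPTIONS before failing
        ALTER TABLE Table1
          ADD CONSTRAINT table1_uq UNIQUE (col1, col2, col3, col4)
          EXCEPTIONS INTO EXCEPTIONS;

        -- Fetch the duplicate rows directly by ROWID
        SELECT t.*
        FROM   Table1 t
        WHERE  t.ROWID IN (SELECT row_id FROM EXCEPTIONS);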


  • Why is a MySQL multiple-column index overpopulated?

    - by actual
    Consider the following MySQL table:

        CREATE TABLE `log` (
          `what` enum('add', 'edit', 'remove') CHARACTER SET ascii COLLATE ascii_bin NOT NULL,
          `with` int(10) unsigned NOT NULL,
          KEY `with_what` (`with`, `what`)
        ) ENGINE=InnoDB;

        INSERT INTO `log` (`what`, `with`) VALUES
          ('add', 1), ('edit', 1), ('add', 2), ('remove', 2);

    As I understand it, the with_what index must have 2 unique entries on its first (with) level and 3 unique entries in the what "subindex". But MySQL reports 4 unique entries for each level. In other words, the number of unique elements for each level is always equal to the number of rows in the log table. Is that a bug, a feature, or my misunderstanding?
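    A hedged check, using standard MySQL statements: InnoDB's per-level cardinality figures are sampled estimates rather than exact counts, so refreshing the statistics before reading them is worth a try:

        ANALYZE TABLE `log`;    -- rebuild InnoDB's sampled index statistics
        SHOW INDEX FROM `log`;  -- the Cardinality column is an estimate,
                                -- not an exact distinct-value count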


  • oracle index for string column - does the format of the data affect the quality of the index?

    - by Jayan
    We have the following type of "Unique ID" column for many tables in the database (Oracle). It is a string with the following format:

        <randomnumber>-<ascendingnumber>-<machinename>

    So we have something like this:

        U1234-12345-NBBJD
        U1234-12346-NBBJD
        U1234-12347-NBBJD
        U1234-12348-NBBJD
        U1234-12349-NBBJD

    The UID value is unique; we have a unique index on them. Is the following format more efficient than the above for index scans?

        NBBJD-U1234-12345
        NBBJD-U1234-12346
        NBBJD-U1234-12347
        NBBJD-U1234-12348
        NBBJD-U1234-12349


  • cakephp isUnique for 2 fields?

    - by jodeci
    I have a registration form in which users can fill in two email addresses (email1 & email2). Marketing's requirement is that they need to be unique (unique as in: if we had 10 users, then there would be 10*2=20 unique email addresses). The system is already built on CakePHP, so what I'd like to know is: is there something similar to the isUnique feature (unique in one field) that can do this right out of the box? Or am I doomed to code this myself? Thanks in advance.
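    Not out of the box, as far as I know, but a custom validation rule is short. A hedged sketch for CakePHP 1.x (the model name User and the rule name are assumptions): attach the same rule to both fields and have it search both columns:

        // In the User model
        var $validate = array(
            'email1' => array('rule' => 'checkEmailUnique',
                              'message' => 'This email address is already taken'),
            'email2' => array('rule' => 'checkEmailUnique',
                              'message' => 'This email address is already taken'),
        );

        // Valid only if the value appears in neither column of any other record
        function checkEmailUnique($check) {
            $value = array_shift($check);   // $check is array('email1' => $value)
            $count = $this->find('count', array('conditions' => array(
                'OR'  => array('User.email1' => $value, 'User.email2' => $value),
                'NOT' => array('User.id' => $this->id),
            )));
            return $count == 0;
        }

    You would still need a plain comparison to reject email1 == email2 within the same submission.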


  • Cleaner way to replace a scalar hash value with an array ref?

    - by user275455
    I am building a hash where the keys, associated with scalars, are not necessarily unique. The desired behavior is that if the key is unique, the value is the scalar; if the key is not unique, I want the value to be an array reference of the scalars associated with the key. Since the hash is built up iteratively, I don't know if the key is unique ahead of time. Right now, I am doing something like this:

        if (!defined($hash{$key})) {
            $hash{$key} = $val;
        }
        elsif (ref($hash{$key}) ne 'ARRAY') {
            my @a;
            push(@a, $hash{$key});
            push(@a, $val);
            $hash{$key} = \@a;
        }
        else {
            push(@{$hash{$key}}, $val);
        }

    Is there a simpler way to do this?
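    A hedged simplification of the same idea: promote the lone scalar to an array reference only when a second value shows up, or sidestep the mixed types entirely by always storing array references:

        # Promote the scalar to an arrayref the moment the key repeats
        if (exists $hash{$key}) {
            $hash{$key} = [ $hash{$key} ] unless ref($hash{$key}) eq 'ARRAY';
            push @{ $hash{$key} }, $val;
        }
        else {
            $hash{$key} = $val;
        }

        # Or store arrayrefs for everything, even single values;
        # push autovivifies the arrayref on first use
        push @{ $hash{$key} }, $val;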


  • Index a mysql table of 3 integer fields

    - by Doori Bar
    I have a mysql table of 3 integer fields. None of the fields have a unique value, but the three of them combined are unique. When I query this table, I only search by the first field. Which approach is recommended for indexing such a table: having a multiple-field primary key on the 3 fields, or setting an index on the first field, which is not unique? Thanks, Doori Bar
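    A hedged sketch (the names a, b and c are hypothetical): a composite primary key whose leftmost column is the one you query can serve both purposes, because MySQL can use any leftmost prefix of a composite index:

        -- The combination is unique, so it can be the primary key
        ALTER TABLE t ADD PRIMARY KEY (a, b, c);

        -- A search on the first field alone can still use that key,
        -- via the index's leftmost prefix
        SELECT * FROM t WHERE a = 42;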


  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS). The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek. You might look to the SSMS tool-tip descriptions to explain the differences between them: not hugely helpful, are they? Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true).

    Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things. The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post). I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks? I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog).

    Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts. The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it. The right hand side of the diagram shows a simple query, its associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet. Is this a seek or a scan? The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on. This fragment is what lies behind the single Index Seek icon shown above. You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation.

    Making Sense of Seeks

    Let’s forget all about scans for a moment, and think purely about seeks. Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level. A seek starts at the root and navigates down through the levels of the index to find the point of interest.

    Singleton Lookups

    The simplest sort of seek predicate performs this traversal to find (at most) a single record. This is the case when we search for a single value using a unique index and an equality predicate. It should be readily apparent that this type of search will either find one record, or none at all. This operation is known as a singleton lookup. Given the example table from before, the following query is an example of a singleton lookup seek. Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index. The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table). If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup.

    Range Scans

    The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan. The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range.

    The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page. The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier. One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built. In the present case, the non-clustered primary key index was created as follows:

        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data    INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col ASC)
        )
        ;

    Notice that the primary key index specifies an ascending sort order for the single key column. This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order. If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order.

    A range scan seek predicate may have a Start condition, an End condition, or both. Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction. Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive. The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right.

    Query 1 specifies that all key_col values less than 5 should be returned in ascending order. The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher).

    Query 2 asks for key_col values greater than 95, in descending order. SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95. Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value. This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that. Oh well.

    Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order. This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met.

    Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order. Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true.

    One final point to note: seek operations always have the Ordered: True attribute. This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward. You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally. In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example.

    Multiple Seek Predicates

    As we saw yesterday, a single index seek plan operator can contain one or more seek predicates. These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them. For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20:

        SELECT key_col
        FROM dbo.Example
        WHERE key_col = 10
        OR key_col BETWEEN 15 AND 20
        ORDER BY key_col ASC ;

    In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10]. This allows both range scans to be evaluated by a single seek operator. To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20.

    Final Thoughts

    That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek. On a heap. Not an index! If you would like to run the queries in this post for yourself, there’s a script below. Thanks for reading!
        IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
        BEGIN
            DROP TABLE dbo.Example;
        END
        ;

        -- Test table is a heap
        -- Non-clustered primary key on 'key_col'
        CREATE TABLE dbo.Example
        (
            key_col INTEGER NOT NULL,
            data    INTEGER NOT NULL,
            CONSTRAINT [PK dbo.Example key_col]
                PRIMARY KEY NONCLUSTERED (key_col)
        )
        ;

        -- Non-unique non-clustered index on the 'data' column
        CREATE NONCLUSTERED INDEX [IX dbo.Example data]
        ON dbo.Example (data)
        ;

        -- Add 100 rows
        INSERT dbo.Example WITH (TABLOCKX)
            (key_col, data)
        SELECT
            key_col = V.number,
            data = V.number
        FROM master.dbo.spt_values AS V
        WHERE V.[type] = N'P'
        AND V.number BETWEEN 1 AND 100
        ;

        -- ================
        -- Singleton lookup
        -- ================
        ;

        -- Single value equality seek in a unique index
        -- Scan count = 0 when STATISTICS IO is ON
        -- Check the XML SHOWPLAN
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 32
        ;

        -- ===========
        -- Range Scans
        -- ===========
        ;

        -- Query 1
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col <= 5
        ORDER BY E.key_col ASC
        ;

        -- Query 2
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col > 95
        ORDER BY E.key_col DESC
        ;

        -- Query 3
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 10
        AND E.key_col < 15
        ORDER BY E.key_col ASC
        ;

        -- Query 4
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col >= 20
        AND E.key_col < 25
        ORDER BY E.key_col DESC
        ;

        -- Final query (singleton + range = 2 range scans)
        SELECT E.key_col
        FROM dbo.Example AS E
        WHERE E.key_col = 10
        OR E.key_col BETWEEN 15 AND 20
        ORDER BY E.key_col ASC
        ;

        -- === TIDY UP ===
        DROP TABLE dbo.Example;

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi


  • C# tip: do not use an “is” type check if you will need an “as” cast later

    - by Michael Freidgeim
    We had a debate with one of my colleagues about whether it is good style to check that an object is of a particular type and then cast it to that type. The perfect answer from Jon Skeet, and the answers in "Cast then check or check then cast?", confirmed my point.

        // good
        var coke = cola as CocaCola;
        if (coke != null)
        {
            // some unique coca-cola only code
        }

        // worse
        if (cola is CocaCola)
        {
            var coke = cola as CocaCola;
            // some unique coca-cola only code here.
        }


  • Configuring JPA Primary key sequence generators

    - by pachunoori.vinay.kumar(at)oracle.com
    This article describes the JPA feature of generating and assigning unique sequence numbers to a JPA entity. It provides information on JPA sequence generator annotations and their usage.

    Use Case Description

    Adding a new Employee to the organization using the Employee form should assign a unique employee Id. The following steps describe how to implement the generation of unique employee numbers using the JPA generators feature.

    Steps to configure JPA Generators

    1. Generate the Employee entity using the "Entities from Table Wizard".
    2. Create a Database Connection, select the table "Employee" for which the entity will be generated, and finish the wizard with the default selections.
    3. In the offline database sources, under Schema, create a Sequence object (or copy one to the offline db from the online database connection).
    4. Open persistence.xml in the application navigator, select the entity "Employee" in the structure view, and select the "Generators" tab in the flat editor.
    5. In the Sequence Generator section, enter the name of the sequence generator, "InvSeq", and select from the drop-down list the sequence created in step 3.
    6. Expand the Employee entity in the structure view, select EmployeeId, and select the "Primary Key Generation" tab.
    7. In the Generated Value section, select the "Use Generated Value" check box, select the strategy "Sequence", and select the generator "InvSeq" defined in step 5.

    The following annotations get added for the JPA generator configured in JDeveloper for an entity. To use a specific named sequence object (whether it is generated by schema generation or already exists in the database) you must define a sequence generator using a @SequenceGenerator annotation. Provide a unique label as the name for the sequence generator, and refer to that name in the @GeneratedValue annotation along with the generation strategy.

    For example, see the sample Employee entity code below, configured for sequence generation. EMPLOYEE_ID is the primary key and is configured for auto-generation of sequence numbers. EMPLOYEE_SEQ is the sequence object that exists in the database; it is configured to generate the sequence numbers and assign the value as the primary key to the EMPLOYEE_ID column in the Employee table.

        @SequenceGenerator(name = "InvSeq", sequenceName = "EMPLOYEE_SEQ")
        @Entity
        public class Employee implements Serializable {

            @Id
            @Column(name = "EMPLOYEE_ID", nullable = false)
            @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "InvSeq")
            private Long employeeId;
        }

    @SequenceGenerator defines the sequence generator based on a database sequence object. Usage:

        @SequenceGenerator(name = "SequenceGenerator", sequenceName = "EMPLOYEE_SEQ")

    @GeneratedValue defines the generation strategy and refers to the sequence generator. Usage:

        @GeneratedValue(strategy = GenerationType.SEQUENCE,
                        generator = "name of the sequence generator defined in @SequenceGenerator")


  • Is it possible to have multiple sets of key columns in a table?

    - by Peter Larsson
    Filtered indexes are one of my favorite new things with SQL Server 2008. I am currently working on designing a new data warehouse. There are two restrictions doing this:

    - It has to be fed from the old legacy system with both historical data and new data.
    - It has to be fed from the new business system with new data.

    When we incorporate the new business system, we are going to do that for one market only. It means the old legacy business system will still produce new data for other markets (together with historical data for all markets) and the new business system will produce new data for that one market only. Sounds interesting this far? To accomplish this I did thorough research into the business requirements and the business intelligence needs. Then I went on to design the sucker.

    How does this relate to filtered indexes, you ask? I'll give one example: the Stock transaction table. Well, the key columns for the old legacy system are different from the key columns for the new business system. The old legacy system has a key of 5 columns:

    - Movement date
    - Movement time
    - Product code
    - Order number
    - Sequence number within shipment

    And on top of that, I found out that the Movement Time column is not really a time. It starts out like a time, HH:MM:SS, but seconds are added for each delivery within the shipment, so a Movement Time can look like "12:11:68". The sequence number is ordered over the distributors for shipment. As I said, it is a legacy system. The new business system has one key column, the Movement DateTime (accuracy down to 100 nanoseconds).

    So how to deal with this? One thing would be to have two stock transaction tables, one for the legacy system and one for the new business system. But that would lead to a maintenance overhead, and to using partitioned views for getting data out of the warehouse. Filtered indexes will be of great use here:

        MovementDate   DATETIME2(7)
        MovementTime   CHAR(8)      NULL
        ProductCode    VARCHAR(15)  NOT NULL
        OrderNumber    VARCHAR(30)  NULL
        SequenceNumber INT          NULL

    The sequence number is not even used in the new system, so I added a new IDENTITY column, with a clustered index on it, which can be shared by both systems. Then I created one unique filtered index for the old system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy
            (MovementDate, MovementTime, ProductCode, SequenceNumber)
        INCLUDE (OrderNumber, Col5, Col6, ... )
        WHERE SequenceNumber IS NOT NULL

    And then I created a new unique filtered index for the new business system like this:

        CREATE UNIQUE NONCLUSTERED INDEX IX_Business
            (MovementDate)
        INCLUDE (ProductCode, OrderNumber, Col12, ... )
        WHERE SequenceNumber IS NULL

    This way I can have multiple sets of key columns on the same base table, shared by both systems.


  • Theory Of A Weird Thought - Forms Submission

    - by user2738336
    In theory, suppose two computers, perfectly synced with each other, are open on a website that has a form, and this form has fields where, say, the username has to be unique. Assume both computers have the same information on the form, the submit buttons are pressed at exactly the same time, the two computers have exactly the same build and internet speed, and they get the same response time from the server. Whose information would be submitted to the database, and whose would be denied, given that the username field is unique?
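    However simultaneous the two requests look, the database itself serializes the competing writes; a hedged illustration in generic SQL (the table and column names are assumed):

        -- Both submissions race to execute this statement:
        INSERT INTO users (username) VALUES ('same_name');

        -- The unique index on username admits whichever INSERT acquires the
        -- index lock first; the second statement fails with a duplicate-key
        -- error, and that user's registration is the one denied.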


  • Tracking based on URL referral?

    - by jeremycollins
    Hi, users on my site are given unique URLs, which let me track how many people they have referred to my site, e.g.:

        http://www.example.com/FQ3DL

    (FQ3DL being the unique code/URL.) The first thing I'd like to do is, when a user goes to that link, display the homepage http://www.example.com/ rather than a 404 error. The second thing is: how would I track how many people have visited that URL? Only through Google Analytics, or is there another way to manage it? Thanks!
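    A hedged sketch, assuming an Apache server with mod_rewrite (the script name track.php and the overall pattern are assumptions, not from the question): route anything that looks like a referral code to a script that logs the hit and then serves the homepage, so the visitor never sees a 404:

        # .htaccess - send 5-character codes to a tracking script
        RewriteEngine On
        # skip real files so existing pages keep working
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([A-Z0-9]{5})$ /track.php?code=$1 [L]

        # track.php would then increment a per-code counter in the database
        # and include or redirect to the homepage.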


  • Making Use of Advanced Search Operators

    Search engines provide extra tools, referred to as advanced search operators, to give professional users more control when searching. Advanced search operators are special keywords you can insert into a query to find specific kinds of information that a common search cannot supply. Many of these operators are handy tools for SEO professionals, as well as other people who want very specific details or who prefer to narrow their search to very specific results.
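    A few examples of the kind of operators meant here (these are standard Google operators; support and exact behavior vary by search engine and change over time):

        site:example.com              restrict results to one domain
        intitle:"link building"       pages whose title contains the phrase
        filetype:pdf seo checklist    return only PDF documents
        "exact phrase" -unwanted      match a phrase, exclude a term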


  • Google analytics is counting way too much

    - by Luticka
    I have a website using Google Analytics, but it is counting way too much. To test this, I logged all entries to my database with the time and IP address. My results for one day were:

    Google Analytics:
        Visits: 4078
        Absolute Unique Visitors: 3758

    My database:
        Visits: 4182
        Unique visitors (only by IP): 905

    I use the tracking option "One domain with multiple subdomains" because the website is accessible both at www.example.com and example.com. Am I missing something, or what could be wrong?


  • shared transaction ID function among multiple threads

    - by poly
    I'm writing an application in C that requires multiple threads to request a unique transaction ID from a function, as shown below:

        struct list {
            int id;
            struct list *next;
        };

        function generate_id() {
            /* a linked list is built here to hold 10 million IDs */
        }

    My concern is how to sync between two or more threads so that the transaction ID can be unique among them without using a mutex. Is this possible? Please share anything, even if I need to change the linked list to something else.
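    If the IDs just need to be distinct and increasing (rather than drawn from a pre-built list), a hedged sketch with no mutex at all: a C11 atomic counter hands every thread a unique value (older GCC/Clang offer __sync_fetch_and_add for the same effect):

        #include <stdatomic.h>
        #include <stdio.h>

        /* One shared counter; atomic_fetch_add is lock-free on mainstream
           platforms, so no mutex is required. */
        static atomic_long next_id = ATOMIC_VAR_INIT(1);

        long generate_id(void)
        {
            /* Atomically returns the current value and increments it,
               so every calling thread receives a distinct ID. */
            return atomic_fetch_add(&next_id, 1);
        }

        int main(void)
        {
            printf("%ld\n", generate_id());   /* 1 */
            printf("%ld\n", generate_id());   /* 2 */
            return 0;
        }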

