Search Results

Search found 19928 results on 798 pages for 'matt null'.

  • How can I make TextToSpeech speak a text at max volume and restore the original volume after it finishes speaking?

    - by HelloCW
    I save the current volume both STREAM_RING and STREAM_MUSIC before sTts.get().speak(s, TextToSpeech.QUEUE_ADD, null), I hope the TextToSpeech can speak a text with max volume, but in fact I find the TextToSpeech speak the text with current volume, it seems that sTts.get().speak is asynchronous. How can I make TextToSpeech to speak a text with max volume and restore original volume after speak end? Thanks! public class SpeechTxt { private static SoftReference<TextToSpeech> sTts; public static void SpeakOut(final Context context, final String s) { final Context appContext = context.getApplicationContext(); if (sTts == null) { sTts = new SoftReference<TextToSpeech>(new TextToSpeech(appContext, new TextToSpeech.OnInitListener() { @Override public void onInit(int status) { if (status == TextToSpeech.SUCCESS) { speak(appContext, s); } else { } } })); } else { speak(appContext, s); } } private static void speak(Context context, String s) { if (sTts != null) { switch (sTts.get().setLanguage(Locale.getDefault())) { case TextToSpeech.LANG_COUNTRY_AVAILABLE: case TextToSpeech.LANG_COUNTRY_VAR_AVAILABLE: case TextToSpeech.LANG_AVAILABLE: { sTts.get().setPitch((float) 0.6); sTts.get().setSpeechRate((float) 0.8); int currentRing=PublicParFun.GetCurrentVol(context, AudioManager.STREAM_RING); int currentPlay=PublicParFun.GetCurrentVol(context, AudioManager.STREAM_MUSIC); PublicParFun.SetRingVol(context, 0); PublicParFun.SetPlayVol(context,1000000); sTts.get().speak(s, TextToSpeech.QUEUE_ADD, null); PublicParFun.SetRingVol(context, currentRing); PublicParFun.SetPlayVol(context,currentPlay); break; } case TextToSpeech.LANG_MISSING_DATA: { break; } case TextToSpeech.LANG_NOT_SUPPORTED: // not much to do here } } } public static int GetCurrentVol(Context myContext,int streamType){ AudioManager mAudioManager = (AudioManager)myContext.getSystemService(Context.AUDIO_SERVICE); int current = mAudioManager.getStreamVolume( streamType); return current; } public static void SetRingVol(Context myContext,int vol){ SetVol(myContext,AudioManager.STREAM_RING, vol); } public static void SetPlayVol(Context myContext,int vol){ SetVol(myContext,AudioManager.STREAM_MUSIC, vol); } private static void SetVol(Context myContext,int streamType,int vol){ AudioManager mAudioManager = (AudioManager)myContext.getSystemService(Context.AUDIO_SERVICE); int max = mAudioManager.getStreamMaxVolume(streamType); if (vol>max){ vol=max; } mAudioManager.setStreamVolume(streamType,vol, 0); } }
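
    Since speak() returns as soon as the utterance is queued, restoring the volume immediately after the call races with playback, which matches the behaviour described above. A hedged sketch of one common workaround: tag the utterance, raise STREAM_MUSIC before speaking, and restore it in the engine's completion callback. The method name speakAtMaxVolume and the utterance id "maxVolSpeech" are illustrative, not part of the original code.

        // Sketch only. Needs android.media.AudioManager, android.speech.tts.TextToSpeech,
        // java.util.HashMap; assumes sTts.get() is a fully initialised TextToSpeech.
        private static void speakAtMaxVolume(final Context context, String s) {
            final AudioManager am =
                    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
            final int savedMusicVol = am.getStreamVolume(AudioManager.STREAM_MUSIC);
            am.setStreamVolume(AudioManager.STREAM_MUSIC,
                    am.getStreamMaxVolume(AudioManager.STREAM_MUSIC), 0);

            sTts.get().setOnUtteranceCompletedListener(
                    new TextToSpeech.OnUtteranceCompletedListener() {
                @Override
                public void onUtteranceCompleted(String utteranceId) {
                    // Runs when the engine reports the utterance finished: put the volume back.
                    am.setStreamVolume(AudioManager.STREAM_MUSIC, savedMusicVol, 0);
                }
            });

            HashMap<String, String> params = new HashMap<String, String>();
            params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "maxVolSpeech");
            params.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
                    String.valueOf(AudioManager.STREAM_MUSIC));
            sTts.get().speak(s, TextToSpeech.QUEUE_ADD, params);
        }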

  • Lazy Loading Association and Casting

    - by Zuber
    I am using NHibernate 2.0.1 and .NET I am facing issues with Lazy loading an association I have a BusinessObject class that has associations to other BusinessObject in it, and it can go deeper. The following function is in the BusinessObject to read the values of a collection in the BusinessObject. public virtual object GetFieldValue(string fieldName) { var fieldItems = fieldName.Split(AppConstants.DotChar); var objectToRead = this; for (var i = 0; i < fieldItems.Length - 1; i++) { objectToRead = (BusinessObject) objectToRead.GetFieldValue(fieldItems[i]); } //if (objectToRead._data == null) return objectToRead.SystemId + " Error: _data was null"; return objectToRead.FieldValue(fieldName.LastItem()); } The FieldValue function is described below private object FieldValue(string fieldName) { return _data.Contains(fieldName) ? _data[fieldName] : null; } The BusinessObject has a dictionary_data which stores the field values. Assume the fieldName is BusinessDriver.Description and the BusinessObject which has this field is StrategyBusinessDriver This code breaks down the field name into two - BusinessDriver & Description. The first iteration reads the BusinessDriver object from StrategyBusinessDriver. It is cast into a BusinessObject type so that I can call the GetFieldValue again on it to read the next field i.e Description in the BusinessDriver. The problem is that when I read the BusinessDriver in the first iteration and cast it, I get the Ids and all other details of the BusinessObject but the field dictionary _data and other collections are not fetched. This should be fetched lazily when I read the _data of the BusinessObject. However, this does not happen and I get an error that _data is null. Is there something wrongly coded because of which the collection is not fetched lazily? Please ask for more clarifications if needed. Thanks in advance. UPDATE: It works when I don't do Lazy load.

  • How do I pass data from a BroadcastReceiver through to an Activity being started?

    - by Tom Hume
    I've got an Android application which needs to be woken up sporadically throughout the day. To do this, I'm using the AlarmManager to set up a PendingIntent and have this trigger a BroadcastReceiver. This BroadcastReceiver then starts an Activity to bring the UI to the foreground. All of the above seems to work, in that the Activity launches itself correctly; but I'd like the BroadcastReceiver to notify the Activity that it was started by the alarm (as opposed to being started by the user). To do this I'm trying, from the onReceive() method of the BroadcastReceiver to set a variable in the extras bundle of the intent, thus: Intent i = new Intent(context, MyActivity.class); i.putExtra(wakeupKey, true); i.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); context.startActivity(i); In the onResume() method of my Activity, I then look for the existence of this boolean variable: protected void onResume() { super.onResume(); String wakeupKey = "blah"; if (getIntent()!=null && getIntent().getExtras()!=null) Log.d("app", "onResume at " + System.currentTimeMillis() + ":" + getIntent().getExtras().getBoolean(wakeupKey)); else Log.d("app", "onResume at " + System.currentTimeMillis() + ": null"); } The getIntent().getExtras() call in onResume() always returns null - I don't seem to be able to pass any extras through at all in this bundle. If I use the same method to bind extras to the PendingIntent which triggers the BroadcastReceiver however, the extras come through just fine. Can anyone tell me what's different about passing a bundle from a BroadcastReceiver to an Activity, as opposed to passing the bundle from an Activity to a BroadcastReceiver? I fear I may be doing something very very obvious wrong here...
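
    One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: if MyActivity is already alive when the receiver fires and is brought forward as singleTop (or with FLAG_ACTIVITY_SINGLE_TOP), the alarm's Intent is delivered to onNewIntent() and getIntent() keeps returning the Intent the Activity was originally launched with, which carries no extras. Capturing the fresh Intent makes the extras visible to onResume():

        // In MyActivity (sketch): remember the most recent Intent so that
        // getIntent() in onResume() sees the extras set by the BroadcastReceiver.
        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            setIntent(intent);   // without this, getIntent() keeps returning the launch Intent
        }

    It is also worth confirming that the key passed to putExtra() in the receiver and the wakeupKey string in onResume() ("blah") are really the same constant, since a mismatch silently returns the default value rather than an error.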

  • MySQL select using datetime, group by date only

    - by Matt
    Is it possible to select a datetime field from a MySQL table and group by the date only? I'm trying to output a list of events that happen at multiple times, grouped by the date each one happened on. My table/data looks like this (the timestamp is a datetime field):
        1. 2010-03-21 18:00:00 Event1
        2. 2010-03-21 18:30:00 Event2
        3. 2010-03-30 13:00:00 Event3
        4. 2010-03-30 14:00:00 Event4
    I want to output something like this:
        March 21st
          1800 - Event 1
          1830 - Event 2
        March 30th
          1300 - Event 3
          1400 - Event 4
    Thanks!
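
    A hedged sketch of one way to get that shape, assuming the table is called events with columns ts (DATETIME) and name (both names are assumptions, not from the question); GROUP BY DATE(ts) collapses the rows to one per calendar day and GROUP_CONCAT keeps the individual times:

        SELECT DATE(ts) AS event_date,
               GROUP_CONCAT(CONCAT(DATE_FORMAT(ts, '%H%i'), ' - ', name)
                            ORDER BY ts SEPARATOR '\n') AS events
        FROM   events
        GROUP  BY DATE(ts)
        ORDER  BY event_date;

    The same result is often easier to produce by selecting the rows ORDER BY ts and doing the per-day grouping in application code, since the "March 21st" formatting is purely presentational.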

  • How does 'lazy' work?

    - by Matt Fenwick
    What is the difference between these two functions? I see that lazy is intended to be lazy, but I don't understand how that is accomplished. -- | Identity function. id :: a -> a id x = x -- | The call '(lazy e)' means the same as 'e', but 'lazy' has a -- magical strictness property: it is lazy in its first argument, -- even though its semantics is strict. lazy :: a -> a lazy x = x -- Implementation note: its strictness and unfolding are over-ridden -- by the definition in MkId.lhs; in both cases to nothing at all. -- That way, 'lazy' does not get inlined, and the strictness analyser -- sees it as lazy. Then the worker/wrapper phase inlines it. -- Result: happiness Tracking down the note in MkId.lhs (hopefully this is the right note and version, sorry if it's not): Note [lazyId magic] ~~~~~~~~~~~~~~~~~~~ lazy :: forall a?. a? -> a? (i.e. works for unboxed types too) Used to lazify pseq: pseq a b = a `seq` lazy b Also, no strictness: by being a built-in Id, all the info about lazyId comes from here, not from GHC.Base.hi. This is important, because the strictness analyser will spot it as strict! Also no unfolding in lazyId: it gets "inlined" by a HACK in CorePrep. It's very important to do this inlining after unfoldings are exposed in the interface file. Otherwise, the unfolding for (say) pseq in the interface file will not mention 'lazy', so if we inline 'pseq' we'll totally miss the very thing that 'lazy' was there for in the first place. See Trac #3259 for a real world example. lazyId is defined in GHC.Base, so we don't have to inline it. If it appears un-applied, we'll end up just calling it. I don't understand that because it refers to lazyId instead of lazy. How does lazy work?

  • EJB3 Transaction Propagation

    - by Matt S.
    I have a stateless bean something like: @Stateless public class MyStatelessBean implements MyStatelessLocal, MyStatelessRemote { @PersistenceContext(unitName="myPC") private EntityManager mgr; @TransationAttribute(TransactionAttributeType.SUPPORTED) public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.process(obj); } } @TransationAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } The typically usage then is the client would call processObjects(...), which doesn't actually interact with the entity manager. It does what it needs to do and calls process(...) individually for each object to process. The duration of process(...) is relatively short, but processObjects(...) could take a very long time to run through everything. Therefore I don't want it to maintain an open transaction. I do need the individual process(...) operations to operate within their own transaction. This should be a new transaction for every call. Lastly I'd like to keep the option open for the client to call process(...) directly. I've tried a number of different transaction types: never, not supported, supported (on processObjects) and required, requires new (on process) but I get TransactionRequiredException every time merge() is called. I've been able to make it work by splitting up the methods into two different beans: @Stateless @TransationAttribute(TransactionAttributeType.NOT_SUPPORTED) public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 { @EJB private MyStatelessBean2 myBean2; public void processObjects(List<Object> objs) { // this method just processes the data; no need for a transaction for(Object obj : objs) { this.myBean2.process(obj); } } } @Stateless public class MyStatelessBean2 implements MyStatelessLocal2, MyStatelessRemote2 { @PersistenceContext(unitName="myPC") private EntityManager mgr; @TransationAttribute(TransactionAttributeType.REQUIRES_NEW) public void process(Object obj) { // do some work with obj that must be in the scope of a transaction this.mgr.merge(obj); // ... this.mgr.merge(obj); // ... this.mgr.flush(); } } but I'm still curious if it's possible to accomplish this in one class. It looks to me like the transaction manager only operates at the bean level, even when individual methods are given more specific annotations. So if I mark one method in a way to prevent the transaction from starting calling other methods within that same instance will also not create a transaction, no matter how they're marked? I'm using JBoss Application Server 4.2.1.GA, but non-specific answers are welcome / preferred.
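
    The usual explanation for this behaviour is that this.process(obj) is a plain Java call that bypasses the container's proxy, so the REQUIRES_NEW attribute on process() is never consulted; only invocations that travel through an EJB reference pick up per-method transaction attributes, which is why the two-bean split works. A hedged sketch of keeping it in one class by routing the internal call back through the container with SessionContext.getBusinessObject(), assuming MyStatelessLocal declares both methods and the annotation is spelled @TransactionAttribute:

        // Sketch only; needs javax.ejb.*, javax.annotation.Resource,
        // javax.persistence.PersistenceContext, java.util.List.
        @Stateless
        public class MyStatelessBean implements MyStatelessLocal, MyStatelessRemote {

            @Resource
            private SessionContext ctx;

            @PersistenceContext(unitName = "myPC")
            private EntityManager mgr;

            @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
            public void processObjects(List<Object> objs) {
                // Container-managed reference to this same bean, so the per-method
                // transaction attributes are honoured on the inner call.
                MyStatelessLocal self = ctx.getBusinessObject(MyStatelessLocal.class);
                for (Object obj : objs) {
                    self.process(obj);   // each call runs in its own new transaction
                }
            }

            @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
            public void process(Object obj) {
                mgr.merge(obj);
                mgr.flush();
            }
        }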

  • Writing a Jeweler Rakefile that adds dependencies depending on RUBY_ENGINE (ruby or jruby)

    - by Matt Zukowski
    I have a Rakefile that includes this: Jeweler::Tasks.new do |gem| # ... gem.add_dependency('json') end The gemspec that this generates builds a gem that can't be installed on jruby because the 'json' gem is native. For jruby, this would have to be: Jeweler::Tasks.new do |gem| # ... gem.add_dependency('json-jruby') end How do I conditionally add the dependency for 'json-jruby' when RUBY_ENGINE == 'java'? It seems like my only option is to manually edit the gemspec file that jeweler generates to add the RUBY_ENGINE check. But I'd rather avoid this, since it kind of defeats the purpose of using jeweler in the first place. Any ideas?
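
    A hedged sketch of branching inside the Jeweler block itself; RUBY_ENGINE only exists on Ruby 1.9+ and JRuby, so it is guarded with defined? here:

        Jeweler::Tasks.new do |gem|
          # ...
          if defined?(RUBY_ENGINE) && RUBY_ENGINE == 'java'
            gem.add_dependency('json-jruby')   # the native json gem won't build on JRuby
          else
            gem.add_dependency('json')
          end
        end

    The caveat is that the generated gemspec is a snapshot of whichever interpreter ran the rake task, so a gem built under MRI still declares 'json'; one common approach is to build and publish the gem once per platform, running the task under MRI and again under JRuby.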

  • Globals are bad! But should I use them in this context?

    - by Matt
    Would the $link to my database be one thing I should put in the global scope? In my setup (lots of functions) it seems as though having only this one variable in the global scope would be wise. I am currently passing it back and forth between the functions so that I do not have to keep it in the global scope, but that is becoming a bit of a hindrance to my script. Please advise.

  • JavaScript types

    - by Alex Ivasyuv
    Hi, as per http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-262.pdf JavaScript has 6 types: undefined, null, boolean, string, number, and object.
        var und; console.log(typeof und);              // <-- undefined
        var n = null; console.log(typeof n);           // <--- **object**!
        var b = true; console.log(typeof b);           // <-- boolean
        var str = "myString"; console.log(typeof str); // <-- string
        var int = 10; console.log(typeof int);         // <-- number
        var obj = {}; console.log(typeof obj);         // <-- object
    Question 1: Why does typeof null report "object" when the specification defines a separate Null type?
    Question 2: What about functions?
        var f = function() {}; console.log(typeof f);  // <-- function
    Variable f has "function" type. Why is that not listed in the specification as a separate type? Thanks,

  • Update table.column with another table.column with common joined column

    - by Matt
    Hit a speed bump trying to update some column values in my table from another table. This is what is supposed to happen when everything works: correct all the city/state entries in tblWADonations by writing an update statement that copies the city and state from the joined zip-code table into the tblWADonations city and state columns.
        TBL NAME        | COLUMN NAMES
        tblZipcodes     | zip, city, state
        tblWADonations  | zip, oldcity, oldstate
    This is what I have so far:
        UPDATE tblWADonations
        SET oldCity = tblZipCodes.city, oldState = tblZipCodes.state
        FROM tblWADonations
        INNER JOIN tblZipCodes ON tblWADonations.zip = tblZipCodes.zip
        WHERE oldCity <> tblZipcodes.city;
    There seem to be easy ways to do this online, but I am overlooking something. Running it by hand and in the editor, this is what it kicks back:
        Msg 8152, Level 16, State 2, Line 1
        String or binary data would be truncated.
        The statement has been terminated.
    Please include a SQL statement or tell me where I need to make the edit so I can mark this post as a reference in my favorites. Thanks!
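
    Msg 8152 means some value being written is longer than the destination column, so the join itself is probably fine and one of oldcity/oldstate in tblWADonations is simply declared shorter than the corresponding tblZipcodes column. A hedged way to confirm before touching the UPDATE, using the standard INFORMATION_SCHEMA views (the lengths in the second query are placeholders, not values from the question):

        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE (TABLE_NAME = 'tblWADonations' AND COLUMN_NAME IN ('oldcity', 'oldstate'))
           OR (TABLE_NAME = 'tblZipcodes'    AND COLUMN_NAME IN ('city', 'state'));

        -- Then either widen the short column, e.g.
        -- ALTER TABLE tblWADonations ALTER COLUMN oldCity varchar(50);
        -- or list the offending rows (replace 20 and 2 with the real destination lengths):
        SELECT z.zip, z.city, z.state
        FROM   tblWADonations d
        INNER JOIN tblZipCodes z ON d.zip = z.zip
        WHERE  LEN(z.city) > 20 OR LEN(z.state) > 2;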

  • AudioQueue ate my buffer (first 15 milliseconds of it)

    - by iter
    I am generating audio programmatically. I hear gaps of silence between my buffers. When I hook my phone to a scope, I see that the first few samples of each buffer are missing, and in their place is silence. The length of this silence varies from almost nothing to as much as 20 ms. My first thought is that my original callback function takes too much time. I replace it with the shortest one possible--it re-renqueues the same buffer over and over. I observe the same behavior. AudioQueueRef aq; AudioQueueBufferRef aq_buffer; AudioStreamBasicDescription asbd; void aq_callback (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) { OSStatus s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL); } void aq_init(void) { OSStatus s; asbd.mSampleRate = AUDIO_SAMPLES_PER_S; asbd.mFormatID = kAudioFormatLinearPCM; asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; asbd.mBytesPerPacket = 1; asbd.mFramesPerPacket = 1; asbd.mBytesPerFrame = 1; asbd.mChannelsPerFrame = 1; asbd.mBitsPerChannel = 8; asbd.mReserved = 0; int PPM_PACKETS_PER_SECOND = 50; // one buffer is as long as one PPM frame int BUFFER_SIZE_BYTES = asbd.mSampleRate/PPM_PACKETS_PER_SECOND*asbd.mBytesPerFrame; s = AudioQueueNewOutput(&asbd, aq_callback, NULL, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq); s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer); // put samples in the buffer buffer_data(my_data, aq_buffer); s = AudioQueueStart(aq, NULL); s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL); }
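
    Nothing in the callback itself looks expensive, but one commonly reported cause of a gap at the start of each buffer is priming order and buffer count: here the queue is started before any buffer is enqueued, and with only a single buffer the hardware has nothing to play while that buffer is being re-enqueued. A hedged rearrangement of the end of aq_init, still using the question's names (NUM_BUFFERS and aq_buffers are additions):

        /* Sketch: fill and enqueue the buffer(s) first, then start the queue. */
        enum { NUM_BUFFERS = 3 };
        AudioQueueBufferRef aq_buffers[NUM_BUFFERS];

        for (int i = 0; i < NUM_BUFFERS; i++) {
            s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffers[i]);
            buffer_data(my_data, aq_buffers[i]);                  /* prime with samples */
            s = AudioQueueEnqueueBuffer(aq, aq_buffers[i], 0, NULL);
        }
        s = AudioQueueStart(aq, NULL);                            /* start only once primed */

    The callback then refills whichever buffer was just consumed, so there is always at least one full buffer queued ahead of the one currently playing.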

  • How to prevent MSBUILD from copying dependent GAC assemblies to bin

    - by Matt Wrock
    I have an MSBuild task that builds my solution, and I am migrating it from .NET 3.5 to 4.0. I have some dependent DLLs that have Copy Local set to true. The 4.0 version of MSBuild is not only copying the dependent DLL (which I want), it is also copying all dependent assemblies of that DLL from the 32-bit version of the GAC to my bin. Not only do I not want these files copied from the GAC, I especially do not want the 32-bit versions for this 64-bit build. Has the behavior changed in MSBuild 4.0? And does anyone know how to force MSBuild to use the 3.5 behavior?
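
    Reference resolution did change in MSBuild 4.0; the property usually pointed to for exactly this symptom is CopyLocalDependenciesWhenParentReferenceInGac, which 4.0 introduced with a default of true. Offered as a hedged suggestion to verify against your targets files: setting it to false should stop second-level dependencies of a GAC reference from being copied to bin, i.e. the 3.5 behaviour.

        <!-- In the project file, inside a PropertyGroup (assumption: the 4.0 common targets honour this) -->
        <PropertyGroup>
          <CopyLocalDependenciesWhenParentReferenceInGac>false</CopyLocalDependenciesWhenParentReferenceInGac>
        </PropertyGroup>

    Passing it on the command line achieves the same thing without editing the projects, e.g. msbuild MySolution.sln /p:CopyLocalDependenciesWhenParentReferenceInGac=false.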

  • Is there any danger in calling free() or delete instead of delete[]? [closed]

    - by Matt Joiner
    Possible Duplicate: ( POD )freeing memory : is delete[] equal to delete ? Does delete deallocate the elements beyond the first in an array? char *s = new char[n]; delete s; Does it matter in the above case seeing as all the elements of s are allocated contiguously, and it shouldn't be possible to delete only a portion of the array? For more complex types, would delete call the destructor of objects beyond the first one? Object *p = new Object[n]; delete p; How can delete[] deduce the number of Objects beyond the first, wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example one could assume that not all allocators would provide a granularity of a single byte. Then any particular allocation could exceed the required size for each element by a whole element or more. For primitive types, such as char, int, is there any difference between: int *p = new int[n]; delete p; delete[] p; free p; Except for the routes taken by the respective calls through the delete-free deallocation machinery?
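
    For what it's worth, the defined pairings can be shown in a few lines; everything else (plain delete on a new[] pointer, free() on any new pointer) is undefined behaviour even for char, because the array form is allowed to stash its element count in an allocator-specific header that plain delete and free() know nothing about. A minimal sketch:

        #include <cstdlib>

        struct Object { ~Object() {} };

        int main() {
            char *s = new char[10];
            delete[] s;                 // array new pairs only with array delete

            Object *p = new Object[3];
            delete[] p;                 // runs all three destructors, then frees the block

            int *q = static_cast<int*>(std::malloc(10 * sizeof(int)));
            std::free(q);               // malloc pairs only with free

            // delete s; or free(s); on memory from new[] is undefined behaviour,
            // even when it happens to appear to work on a given allocator.
            return 0;
        }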

  • WPF Animation / Processing priority

    - by Matt B
    Hi all, I have a button which has an animation (in XAML) on its click event. Cool so far. The problem is that I also have processing occurring on the click event (so I can do stuff), and this occurs first. How do I prioritise or reorder things so that the animation takes place before any custom processing? Thanks.

  • AVL tree in C language

    - by I_S_W
    Hey all, I am currently doing a project that requires the use of AVL trees. The insert function I wrote for the AVL does not seem to be working; it works for 3 or 4 nodes at most. I would really appreciate your help. The attempt is below:
        Tree insert(Tree t, char name[80], int num)
        {
            if (t == NULL) {
                t = (Tree)malloc(sizeof(struct node));
                if (t != NULL) {
                    strcpy(t->name, name);
                    t->num = num;
                    t->left = NULL;
                    t->right = NULL;
                    t->height = 0;
                }
            }
            else if (strcmp(name, t->name) < 0) {
                t->left = insert(t->left, name, num);
                if ((height(t->left) - height(t->right)) == 2)
                    if (strcmp(name, t->left->name) < 0) {
                        t = s_rotate_left(t);
                    } else {
                        t = d_rotate_left(t);
                    }
            }
            else if (strcmp(name, t->name) > 0) {
                t->right = insert(t->right, name, num);
                if ((height(t->right) - height(t->left)) == 2)
                    if (strcmp(name, t->right->name) > 0) {
                        t = s_rotate_right(t);
                    } else {
                        t = d_rotate_right(t);
                    }
            }
            t->height = max(height(t->left), height(t->right)) + 1;
            return t;
        }

  • "Programming In Haskell" error in sat function

    - by Matt Ellen
    I'm in chapter 8 of Graham Hutton's Programming in Haskell and I'm copying the code and testing it in GHC. See the slides here: http://www.cis.syr.edu/~sueo/cis352/chapter8.pdf in particular slide 15 The relevant code I've copied so far is: type Parser a = String -> [(a, String)] pih_return :: a -> Parser a pih_return v = \inp -> [(v, inp)] failure :: Parser a failure = \inp -> [] item :: Parser Char item = \inp -> case inp of [] -> [] (x:xs) -> [(x,xs)] parse :: Parser a -> String -> [(a, String)] parse p inp = p inp sat :: (Char -> Bool) -> Parser Char sat p = do x <- item if p x then pih_return x else failure I have changed the name of the return function from the book to pih_return so that it doesn't clash with the Prelude return function. The errors are in the last function sat. I have copied this directly from the book. As you can probably see p is a function from Char to Bool (e.g. isDigit) and x is of type [(Char, String)], so that's the first error. Then pih_return takes a value v and returns [(v, inp)] where inp is a String. This causes an error in sat because the v being passed is x which is not a Char. I have come up with this solution, by explicitly including inp into sat sat :: (Char -> Bool) -> Parser Char sat p inp = do x <- item inp if p (fst x) then pih_return (fst x) inp else failure inp Is this the best way to solve the issue?
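
    The root cause is that with Parser left as a bare type synonym there is no parser Monad instance for do-notation to use: in the book these definitions sit alongside a parser monad, while here the do block in sat either finds no instance or falls through to the function (reader) monad, which is exactly why x comes out as the whole [(Char, String)] result list. Threading inp by hand as above works; an equivalent hedged sketch that keeps the type synonym and avoids do-notation altogether:

        sat :: (Char -> Bool) -> Parser Char
        sat p = \inp -> case item inp of
                          []            -> []
                          (x, rest) : _ -> if p x then pih_return x rest else failure rest

    Applying pih_return and failure to rest here plays the role the parser monad's bind would normally play: the remaining input is passed along explicitly.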

  • C file read leaves garbage characters

    - by KJ
    Hi. I'm trying to read the contents of a file into my program but I keep occasionally getting garbage characters at the end of the buffers. I haven't been using C a lot (rather I've been using C++) but I assume it has something to do with streams. I don't really know what to do though. I'm using MinGW. Here is the code (this gives me garbage at the end of the second read):
        #include <stdio.h>
        #include <stdlib.h>

        char* filetobuf(char *file)
        {
            FILE *fptr;
            long length;
            char *buf;

            fptr = fopen(file, "r");        /* Open file for reading */
            if (!fptr)                      /* Return NULL on failure */
                return NULL;
            fseek(fptr, 0, SEEK_END);       /* Seek to the end of the file */
            length = ftell(fptr);           /* Find out how many bytes into the file we are */
            buf = (char*)malloc(length+1);  /* Allocate a buffer for the entire length of the file and a null terminator */
            fseek(fptr, 0, SEEK_SET);       /* Go back to the beginning of the file */
            fread(buf, length, 1, fptr);    /* Read the contents of the file in to the buffer */
            fclose(fptr);                   /* Close the file */
            buf[length] = 0;                /* Null terminator */
            return buf;                     /* Return the buffer */
        }

        int main()
        {
            char* vs;
            char* fs;
            vs = filetobuf("testshader.vs");
            fs = filetobuf("testshader.fs");
            printf("%s\n\n\n%s", vs, fs);
            free(vs);
            free(fs);
            return 0;
        }
    The filetobuf function is from this example: http://www.opengl.org/wiki/Tutorial2:_VAOs,_VBOs,_Vertex_and_Fragment_Shaders_%28C_/_SDL%29. It seems right to me though. So anyway, what's up with that?
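
    A plausible culprit on MinGW/Windows, offered as a hedged guess: the file is opened in text mode ("r"), so fread translates "\r\n" into "\n" and returns fewer bytes than the length ftell reported, leaving uninitialised bytes between the end of the real data and buf[length]. Null-terminating at the number of bytes actually read, or opening in binary mode, avoids that:

        size_t nread;

        fptr = fopen(file, "rb");                 /* "rb": no CR/LF translation */
        /* ... fseek / ftell / malloc as before ... */
        nread = fread(buf, 1, length, fptr);      /* fread returns how much was really read */
        fclose(fptr);
        buf[nread] = '\0';                        /* terminate at the actual end of the data */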

  • Linq to SQL Error with Many-to-Many table

    - by Matt Connolly
    I am getting the following error with linq-to-sql when I try to access a many-to-many collection: Members 'Int32 XXX' and 'Int32 YYY' both marked as IsPrimaryKey and IsDbGenerated. The statement is true, in that both of those columns are primary key integers with identity insert. The table I am trying to access has a foreign key to both YYY and ZZZ, and then ZZZ has a foreign key to XXX. There doesn't appear to be anything wrong with my data structure. I tried setting "Child Property" to false on the ZZZ-YYY relationship, but it didn't change anything.

  • jQuery counter to count up to a target number

    - by Matt Huggins
    I'm trying to find out if anyone knows about an already existing jQuery plugin that will count up to a target number at a specified speed. For example, take a look at Google's number of MB of free storage on the Gmail homepage, under the heading that reads "Lots of space". It has a starting number in a <span> tag, and slowly counts upward every second. I'm looking for something similar, but I'd like to be able to specify: The start number The end number The amount of time it should take to get from start to end. A custom callback function that can execute when a counter is finished.
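
    If no ready-made plugin turns up, the effect is small enough to build directly on jQuery.animate() applied to a plain object, which covers all four requirements (start value, end value, duration, completion callback) in one call. A hedged sketch; the selector and the countUp name are purely illustrative:

        function countUp($el, start, end, duration, done) {
          $({ n: start }).animate({ n: end }, {
            duration: duration,
            easing: 'linear',
            step: function (now) { $el.text(Math.floor(now)); },   // runs every animation frame
            complete: function () { $el.text(end); if (done) done(); }
          });
        }

        // usage: count from 0 to 7314 over 5 seconds, then run a callback
        countUp($('#counter'), 0, 7314, 5000, function () { console.log('finished'); });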

  • Forking with Pipes

    - by Luke
    Hello I have tried to do fork() and piping in main and it works perfectly fine but when I try to implement it in a function for some reason I don't get any output, this is my code: void cmd(int **pipefd,int count,int type, int last); int main(int argc, char *argv[]) { int pipefd[3][2]; int i, total_cmds = 3,count = 0; int in = 1; for(i = 0; i < total_cmds;i++){ pipe(pipefd[count++]); cmd(pipefd,count,i,0); } /*Last Command*/ cmd(pipefd,count,i,1); exit(EXIT_SUCCESS); } void cmd(int **pipefd,int count,int type, int last){ int child_pid,i,i2; if ((child_pid = fork()) == 0) { if(count == 1){ dup2(pipefd[count-1][1],1); /*first command*/ } else if(last!=0){ dup2(pipefd[count - 2][0],0); /*middle commands*/ dup2(pipefd[count - 1][1],1); } else if(last == 1){ dup2(pipefd[count - 1][0],0); /*last command*/ } for(i = 0; i < count;i++){/*close pipes*/ for(i2 = 0; i2 < 2;i2++){ close(pipefd[i][i2]); }} if(type == 0){ execlp("ls","ls","-al",NULL); } else if(type == 1){ execlp("grep","grep",".bak",NULL); } else if(type==2){ execl("/usr/bin/wc","wc",NULL); } else if(type ==3){ execl("/usr/bin/wc","wc","-l",NULL); } perror("exec"); exit(EXIT_FAILURE); } else if (child_pid < 0) { perror("fork"); exit(EXIT_FAILURE); } } I checked the file descriptors and it is opening the right ones, not sure what the problem could be..
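
    One concrete problem, independent of the dup2 logic: main declares int pipefd[3][2], but cmd takes int **pipefd. A two-dimensional array does not decay to a pointer-to-pointer, so every pipefd[i][j] inside cmd reads from the wrong addresses. A hedged fix is to give the parameter the matching array-of-rows type:

        /* sketch: make the parameter type match the argument main actually passes */
        void cmd(int pipefd[][2], int count, int type, int last);   /* equivalently: int (*pipefd)[2] */

    It is also worth re-checking the branch order inside cmd: because if(last!=0) is tested before else if(last == 1), the middle commands (called with last of 0) never reach either dup2 branch, and the dedicated last == 1 branch is unreachable as written.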

  • Cascading Deletes in SQL Server 2008 not working

    - by Vaccano
    I have the following table setup. Bag | +-> BagID (Guid) +-> BagNumber (Int) BagCommentRelation | +-> BagID (Int) +-> CommentID (Guid) BagComment | +-> CommentID (Guid) +-> Text (varchar(200)) BagCommentRelation has Foreign Keys to Bag and BagComment. So, I turned on cascading deletes for both those Foreign Keys, but when I delete a bag, it does not delete the Comment row. Do need to break out a trigger for this? Or am I missing something? (I am using SQL Server 2008) Note: Posting requested SQL. This is the defintion of the BagCommentRelation table. (I had the type of the bagID wrong (I thought it was a guid but it is an int).) CREATE TABLE [dbo].[Bag_CommentRelation]( [Id] [int] IDENTITY(1,1) NOT NULL, [BagId] [int] NOT NULL, [Sequence] [int] NOT NULL, [CommentId] [int] NOT NULL, CONSTRAINT [PK_Bag_CommentRelation] PRIMARY KEY CLUSTERED ( [BagId] ASC, [Sequence] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Bag] FOREIGN KEY([BagId]) REFERENCES [dbo].[Bag] ([Id]) ON DELETE CASCADE GO ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Bag] GO ALTER TABLE [dbo].[Bag_CommentRelation] WITH CHECK ADD CONSTRAINT [FK_Bag_CommentRelation_Comment] FOREIGN KEY([CommentId]) REFERENCES [dbo].[Comment] ([CommentId]) ON DELETE CASCADE GO ALTER TABLE [dbo].[Bag_CommentRelation] CHECK CONSTRAINT [FK_Bag_CommentRelation_Comment] GO The row in this table deletes but the row in the comment table does not.
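
    This is arguably expected behaviour rather than a broken cascade: ON DELETE CASCADE only flows from a referenced (parent) table down to the referencing (child) table, so deleting a Bag removes the Bag_CommentRelation rows, while Comment, being itself a parent of the relation table, is never touched. Removing the now-orphaned comments needs either an explicit second DELETE or a trigger. A hedged sketch of an AFTER DELETE trigger on the relation table, following the posted DDL's dbo.Comment/CommentId names and assuming AFTER triggers fire for the cascaded rows on this schema:

        CREATE TRIGGER trg_BagCommentRelation_DeleteComments
        ON dbo.Bag_CommentRelation
        AFTER DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Remove comments that no longer belong to any bag
            DELETE c
            FROM dbo.Comment AS c
            INNER JOIN deleted AS d ON d.CommentId = c.CommentId
            WHERE NOT EXISTS (SELECT 1
                              FROM dbo.Bag_CommentRelation AS r
                              WHERE r.CommentId = c.CommentId);
        END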

  • Before data is entered, is there a way to make a grouped table graphic placeholder?

    - by Matt Winters
    I have a grouped table with 3 sections, each section with a title. The 1st and 3rd sections always have only 1 row of information, so before any user data is entered I just put some words like "Enter Data Here..." as placeholder text. This text is edited (replaced) by the user with their own actual data. No problem. The 2nd section, however, will contain several rows of information entered by the user, and I'd prefer not to enter placeholder data in row 0 and have the user edit the first row of data and then add subsequent rows. If numberOfRowsInSection returns 0, the title for the 3rd section sits too close to the title for the 2nd section and it looks ugly. The best that I could come up with, and I don't know how to do it, is to have a fake graphic placeholder on the striped background (between the 2nd and 3rd titles) that looks like a single row in the 2nd section, put "Enter Data Here..." text in the graphic, and then let the first row of actual data entered and all subsequent rows cover it up. Can anyone tell me how to do this, or offer a better suggestion? Thanks.

  • How to determine why visual studio might be skipping projects when building a solution

    - by Matt
    I am debugging someone else's work and the solution is quite large. When I try to build the entire thing, several projects within the solution don't build and are just skipped. The output window during the build process says:
        1>------ Skipped Rebuild All: Project: pr1lib ------
    How can I determine why these builds were skipped? I am unable to find any additional output. This is with VS2008, and the solution is composed of C# and C++ code.

  • Updating RubyGems alongside an existing packaged installation on Joyent

    - by Matt
    On a Joyent accelerator I'm working with, Ruby and RubyGems were installed when the server was initially set up using Cool Stack. The existing version of RubyGems is 0.9.2. Upgrading RubyGems with the 'sudo gem install rubygems-update' and 'sudo update_rubygems' commands results in the following error:
        ./lib/rubygems.rb:124: uninitialized constant Gem::RbConfig (NameError)
            from setup.rb:24:in `require'
            from setup.rb:24
    Having had little success in fixing this, I want to install a fresh version of RubyGems alongside the existing one. As this is a production server, I want to minimize the environment changes I make. If I install a fresh version of RubyGems from source, how do I set the newly installed version as the default RubyGems to use?
