Search Results

Search found 1044 results on 42 pages for 'thomas jespersen'.


  • Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API

    - by ageektrapped
    I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in C# Console Application. Here's the code I'm using: static void Main(string[] args) { DatastoreManager dm = new DatastoreManager(1033); Collection<Platform> platforms = dm.GetPlatforms(); foreach (var p in platforms) { Console.WriteLine("{0} {1}", p.Name, p.Id); } Platform platform = platforms[3]; Console.WriteLine("Selected {0}", platform.Name); Device device = platform.GetDevices()[0]; device.Connect(); Console.WriteLine("Device Connected"); SystemInfo info = device.GetSystemInfo(); Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo); Console.ReadLine(); } My question: Does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a Console app or PowerShell. Here's the stack trace as requested: System.IO.DirectoryNotFoundException was unhandled Message="The system cannot find the path specified.\r\n" Source="Device Connection Manager" StackTrace: at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice() at Microsoft.SmartDevice.Connectivity.Device.Connect() at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

    Read the article

  • How to get a volume measurement of iPhone recording in dB, with a limit of at least 120dB

    - by Cyber
    Hello, I am trying to make a simple volume meter for the iPhone. I want the volume displayed in dB. When using this tutorial, I am only getting measurements up to 78 dB. I've read that that is because the dBFS spectrum for 16-bit audio recordings is only 96 dB. I tried modifying this piece of code in the init function: dataFormat.mSampleRate = 44100.0f; dataFormat.mFormatID = kAudioFormatLinearPCM; dataFormat.mFramesPerPacket = 1; dataFormat.mChannelsPerFrame = 1; dataFormat.mBytesPerFrame = 2; dataFormat.mBytesPerPacket = 2; dataFormat.mBitsPerChannel = 16; dataFormat.mReserved = 0; I changed the value of mBitsPerChannel, hoping to increase the bit depth of the recording: dataFormat.mBitsPerChannel = 32; With that variable set to 32, the "mAveragePower" function returns only 0. So, how can I measure more decibels? All my code is practically the same as in the tutorial I posted above. Thanks in advance, Thomas
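    For reference, the 96 dB ceiling mentioned above follows directly from the sample depth: the theoretical dynamic range of linear PCM is roughly 20 * log10(2^bits), i.e. about 6.02 dB per bit, which gives roughly 20 * log10(2^16) ≈ 96.3 dB for 16-bit samples and roughly 144.5 dB for 24-bit samples.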

    Read the article

  • String method crashes program...

    - by TimothyTech
    Alright, so I have two identical string methods... string CreateCust() { string nameArray[] ={"Tom","Timo","Sally","Kelly","Bob","Thomas","Samantha","Maria"}; int d = rand() % (8 - 1 + 1) + 1; string e = nameArray[d]; return e; } string CreateFood() { string nameArray[] = {"spagetti", "ChickenSoup", "Menudo"}; int d = rand() % (3 - 1 + 1) + 1; string f = nameArray[d]; return f; } However, no matter what I do in the guts of CreateFood, it will always crash. I created a test chassis for it and it always fails at cMeal = CreateFood(); Customer Cnow; cout << "test1" << endl; cMeal = Cnow.CreateFood(); cout << "test1" << endl; cCustomer = Cnow.CreateCust(); cout << "test1" << endl; I even switched CreateCust with CreateFood and it still fails at the CreateFood function... NOTE: if I make CreateFood an int method, it does work...
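    For what it's worth, the likely culprit is the index arithmetic rather than the string return type: rand() % (3 - 1 + 1) + 1 yields values from 1 to 3, while a three-element array only has valid indices 0 to 2, so index 3 reads past the end. The same off-by-one is present in CreateCust (valid indices 0 to 7, but d can be 8); it is just less likely to blow up there. A minimal corrected sketch of the lookup, not the asker's full class:

        #include <cstdlib>
        #include <ctime>
        #include <iostream>
        #include <string>

        // Picks a random element using an index in [0, 3), avoiding the
        // out-of-range read in the original code, where rand() % 3 + 1
        // can produce 3 and read past the end of a 3-element array.
        std::string CreateFood() {
            static const std::string names[] = {"spagetti", "ChickenSoup", "Menudo"};
            int d = std::rand() % 3;   // 0, 1 or 2 -- always a valid index
            return names[d];
        }

        int main() {
            std::srand(static_cast<unsigned>(std::time(nullptr)));
            std::cout << CreateFood() << std::endl;
            return 0;
        }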

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:
    Category  Column2  Column3
    A         2        3.50
    A         3        2
    B         3        2
    B         1        5
    ...
    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that would handle any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it but I cannot wrap my head around it right now. How would you accomplish it? EDIT: I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that? EDIT: Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets are coming from. I believe knowing the most concise way to group any 2 somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
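    Whatever the language, the usual in-memory answer is a single pass over the rows that accumulates the two sums in an associative container keyed on the Category value; in .NET 2.0 that would be a Dictionary built while reading the data, since LINQ is indeed not available before .NET 3.5. A small sketch of the idea, written here in C++ with the sample rows from the question:

        #include <iostream>
        #include <map>
        #include <string>
        #include <utility>
        #include <vector>

        // One row of the result set: Category, Column2, Column3.
        struct Row {
            std::string category;
            int column2;
            double column3;
        };

        int main() {
            std::vector<Row> rows = {
                {"A", 2, 3.50}, {"A", 3, 2.0}, {"B", 3, 2.0}, {"B", 1, 5.0}
            };

            // Single pass: the key is the Category value, the mapped value holds
            // the running sums of Column2 and Column3 for that category.
            std::map<std::string, std::pair<int, double>> totals;
            for (const Row& r : rows) {
                totals[r.category].first  += r.column2;
                totals[r.category].second += r.column3;
            }

            for (const auto& entry : totals)
                std::cout << entry.first << "  " << entry.second.first
                          << "  " << entry.second.second << "\n";
            return 0;
        }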

    Read the article

  • should variable be released or not? iphone-sdk

    - by psebos
    Hi, In the following piece of code (from a book), data is an NSDictionary *data; defined in the header (no property). In the viewDidLoad of the controller the following occurs: - (void)viewDidLoad { [super viewDidLoad]; NSArray *keys = [NSArray arrayWithObjects:@"home", @"work", nil]; NSArray *homeDVDs = [NSArray arrayWithObjects:@"Thomas the Builder", nil]; NSArray *workDVDs = [NSArray arrayWithObjects:@"Intro to Blender", nil]; NSArray *values = [NSArray arrayWithObjects:homeDVDs, workDVDs, nil]; data = [[NSDictionary alloc] initWithObjects:values forKeys:keys]; } Since I am really new to Objective-C, can someone explain to me why I do not have to retain the variables keys, homeDVDs, workDVDs and values prior to exiting the function? I would expect, prior to the data allocation, something like: [keys retain]; [homeDVDs retain]; [workDVDs retain]; [values retain]; or not? Does initWithObjects copy (recursively) all objects into a new table? Assuming we did not have the last line (the data allocation), should we release all the NSArrays prior to exiting the function (or could we safely assume that all the NSArrays will be autoreleased, since there is no alloc for each one)? Thanks!!!!

    Read the article

  • Making RDoc Ruby Gem Default on Mac OS X

    - by jkale
    Hey all, I've recently installed RDoc (version 2.4.3) through RubyGems to replace the one shipped with Mac OS X (version 1.0.1). Unfortunately, I can still only use RDoc 1.0.1 when I run "rdoc" at the command line. rdoc -v returns: RDoc V1.0.1 - 20041108 I tried amending the $PATH variable to point the first entry to the RDoc 2.4.3 folder, but no luck. I couldn't find anything about this online either, so I thought I'd ask here. Cheers! Update: Running "gem list -d --version 1.0.1 rdoc" returns: *** LOCAL GEMS *** rdoc (2.4.3) Authors: Eric Hodel, Dave Thomas, Phil Hagelberg, Tony Strauss Rubyforge: http://rubyforge.org/projects/rdoc Homepage: http://rdoc.rubyforge.org Installed at: /usr/local/lib/ruby/gems/1.8 RDoc is an application that produces documentation for one or more Ruby source files Therefore, it's definitely the Mac OS X version of RDoc that's interfering with the gem version. Update 2: I found out, using `bash --debugger rdoc`, that the old version of RDoc was in /opt/local/bin. I deleted it and added my gems directory to my $PATH: `export PATH=/usr/local/lib/ruby/gems/1.8/gems/` I now have a fresh working copy of the latest RDoc!

    Read the article

  • How do I [legally] get the current first responder on the screen on an iPhone?

    - by Anthony D
    I submitted my app a little over a week ago and got the dreaded rejection email today. It reads as follows: Dear -----------, Thank you for submitting --------- to the App Store. Unfortunately it cannot be added to the App Store because it is using a private API. Use of non-public APIs, which as outlined in the iPhone Developer Program License Agreement section 3.3.1 is prohibited: "3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs." The non-public API that is included in your application is firstResponder. Regards, iPhone Developer Program Now, the offending API call is actually a solution I found here on SO: UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow]; UIView *firstResponder = [keyWindow performSelector:@selector(firstResponder)]; So this is my question; How do I get the current first responder on the screen? I'm looking for a legal way that won't get my app rejected. Thanks. I figured this out based on the solution provided by Thomas below. Here is what the final code looks like: @implementation UIView (FindFirstResponder) - (UIView *)findFirstResonder { if (self.isFirstResponder) { return self; } for (UIView *subView in self.subviews) { UIView *firstResponder = [subView findFirstResonder]; if (firstResponder != nil) { return firstResponder; } } return nil; } @end

    Read the article

  • sorting names in a linked list

    - by sil3nt
    Hi there, I'm trying to sort names into alphabetical order inside a linked list but am getting a run time error. what have I done wrong here? #include <iostream> #include <string> using namespace std; struct node{ string name; node *next; }; node *A; void addnode(node *&listpointer,string newname){ node *temp; temp = new node; if (listpointer == NULL){ temp->name = newname; temp->next = listpointer; listpointer = temp; }else{ node *add; add = new node; while (true){ if(listpointer->name > newname){ add->name = newname; add->next = listpointer->next; break; } listpointer = listpointer->next; } } } int main(){ A = NULL; string name1 = "bob"; string name2 = "tod"; string name3 = "thomas"; string name4 = "kate"; string name5 = "alex"; string name6 = "jimmy"; addnode(A,name1); addnode(A,name2); addnode(A,name3); addnode(A,name4); addnode(A,name5); addnode(A,name6); while(true){ if(A == NULL){break;} cout<< "name is: " << A->name << endl; A = A->next; } return 0; }
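    The runtime error most likely comes from the else branch of addnode: it advances listpointer (which is the head, passed by reference), it never links the new node into the chain, and when newname sorts after every existing name it walks off the end and dereferences NULL. A short sorted-insert sketch that keeps the same node layout but rewrites the insertion (one possible fix, not the original code):

        #include <iostream>
        #include <string>

        struct node {
            std::string name;
            node* next;
        };

        // Inserts newname so the list stays in ascending order. Walking with a
        // pointer-to-pointer handles the head, middle and end cases identically,
        // and the caller's head pointer is never advanced while searching.
        void addnode(node*& head, const std::string& newname) {
            node** link = &head;
            while (*link != nullptr && (*link)->name < newname)
                link = &(*link)->next;
            node* temp = new node{newname, *link};
            *link = temp;
        }

        int main() {
            node* A = nullptr;
            const std::string names[] = {"bob", "tod", "thomas", "kate", "alex", "jimmy"};
            for (const std::string& n : names)
                addnode(A, n);

            for (node* p = A; p != nullptr; p = p->next)
                std::cout << "name is: " << p->name << "\n";
            // Freeing the nodes is omitted here for brevity.
            return 0;
        }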

    Read the article

  • pdfmark for docinfo metadata in pdf is not accepting accented characters in Keywords or Subject

    - by rpilkey
    I am inserting metadata into PostScript files with a program, to be distilled to PDF with Adobe Distiller. I am using this code that I grabbed from Thomas Merz's "Web Publishing with Acrobat-PDF": /pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse [ /Title (mot accenté) /Author (mot accenté) /Subject (mot accenté) /Keywords (mot accenté) /DOCINFO pdfmark When you look at the metadata, the accented characters turn into "?" in the Subject and Keywords fields, but not the Title and Author fields. The characters are the same ASCII 233. I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up). The file encoding is Latin-1, Unix EOL. I found a mention on the Adobe forums, but the answer didn't make sense to me: http://forums.adobe.com/message/1165593 I changed the encoding to UTF-8 and inserted the characters binarily (in Vim: <Ctrl-v>u00e9), no change. I tried inserting the BOM in a few places; it didn't work. This is with Acrobat Pro 9; I didn't notice this problem with Acrobat Pro 7. Does anybody know of a workaround to get the accented characters into ALL the metadata fields when modifying a PostScript file, or can you tell me if I'm doing it wrong? It seems weird that different fields would not accept the same bytes.

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have 2 questions related to each other: I have an object (class) called, say, MyClass, which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way, and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone can tell me why). Anyway, the data may change in the database from outside my program, and so I have to update the data every so often. To update the list of MyClass I have a method called, say, Update, written in another class, which accepts a list of MyClass. This updates all the instances of MyClass in the list. However, would it be better instead to encapsulate the Update() method inside the MyClass object, so that I would say foreach(MyClass obj in MyClassList) { obj.update(); } Which is the better implementation, and why? The update method requires an XML reader. I have written an XML reader class which is basically a wrapper over the standard XML reader the language natively provides, and which provides application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object - i.e. the MyClass object inheriting from the XML reader because it uses a few methods? I can't see why it should. I don't like the idea of declaring an instance of the XML reader class inside of MyClass, as a MyClass object is meant to be a simple "record" from the database, and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XMLReader isn't static? Any comments would be greatly appreciated. Thanks, Thomas
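    For what it's worth, one common shape for this, which matches the asker's own instinct that MyClass should not inherit from the XML reader, is to keep an Update() method on the record but pass the reader in as a parameter, so the record stays a simple data holder and neither derives from nor owns the reader. A minimal sketch of that shape, written in C++ purely for illustration and with hypothetical names throughout:

        #include <iostream>
        #include <string>
        #include <vector>

        // Hypothetical stand-ins for the asker's types; the names are illustrative only.
        class XmlReader {
        public:
            // The real wrapper would read application-specific data; a stub is enough here.
            std::string ReadValue(const std::string& key) { return "value-of-" + key; }
        };

        class MyClass {
        public:
            // The update logic lives on the record, but the reader is passed in,
            // so MyClass neither inherits from nor permanently owns an XmlReader.
            void Update(XmlReader& reader) { name_ = reader.ReadValue("name"); }
            const std::string& Name() const { return name_; }
        private:
            std::string name_;
        };

        int main() {
            XmlReader reader;
            std::vector<MyClass> records(3);
            for (MyClass& r : records)   // the "foreach (obj in list) obj.Update()" shape
                r.Update(reader);
            std::cout << records.front().Name() << "\n";
            return 0;
        }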

    Read the article

  • making a clean page via page.tpl.php

    - by user360051
    I have a Drupal module creating a page via hook_menu(). I am trying to make it so the page has no extraneous html output, only what I want. You can view the page here, http://www.thomashansen.me/chat/thomas. If you look at the source, you can see a strange script tag at the end. My page-chat.tpl.php looks like this, <?php // $Id$ ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="<?php print $language->language ?>" lang="<?php print $language->language ?>" dir="<?php print $language->dir ?>"> <head> </head> <body> <?php print $content; ?> </body> </html> Where is that script tag coming from? and how do I get rid of it? If you need more information just ask.

    Read the article

  • MySQL count and sum from two different tables

    - by Agent_x
    Hi all, I have a problem with some queries in PHP and MySQL. I have two different tables with one field in common:
    table 1
    id | hits | num_g | cats | usr_id | active
    1  | 10   | 11    | 1    | 53     | 1
    2  | 13   | 16    | 3    | 53     | 1
    1  | 10   | 22    | 1    | 22     | 1
    1  | 10   | 21    | 3    | 22     | 1
    1  | 2    | 6     | 2    | 11     | 1
    1  | 11   | 1     | 1    | 11     | 1
    table 2
    id | usr_id | points
    1  | 53     | 300
    Now I use this statement to sum the total from table 1, where every id counts + 1 too:
    SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) AS tot_h FROM table1 WHERE usr_id!='0' GROUP BY usr_id ASC LIMIT 0 , 15
    and I get the total for each usr_id:
    usr_id | tot_h
    53     | 50
    22     | 63
    11     | 20
    Up to here all is OK. Now I have a second table with extra points (table2). I try this:
    SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) + (SELECT points FROM table2 WHERE usr_id != '0' ) AS tot_h FROM table1 WHERE usr_id != '0' GROUP BY usr_id ASC LIMIT 0 , 15
    but it seems to add the 300 extra points to all users:
    usr_id | tot_h
    53     | 350
    22     | 363
    11     | 320
    Now how can I get the total like the first try, but plus the second table, in one statement? Right now I have just one entry in the second table, but there can be more. Thanks for all the help.
    ===============================================================================
    Hi Thomas, thanks for your reply. I think it is in the right direction, but I'm getting weird results, like:
    usr_id | tot_h
    22     | NULL   <== I think the NULL is because that usr_id has no value in table2
    53     | 1033
    It's like the second user is getting all the values. Then I tried this one:
    SELECT table1.usr_id, COUNT( table1.id ) + SUM( table1.num_g + table1.hits + table2.points ) AS tot_h FROM table1 LEFT JOIN table2 ON table2.usr_id = table1.usr_id WHERE table1.usr_id != '0' AND table2.usr_id = table1.usr_id GROUP BY table1.usr_id ASC
    Same result: I just get the sum of all values and not by each user. I need something like this result:
    usr_id | tot_h
    53     | 53    <==== plus 300 points on table1
    22     | 56    <==== plus 100 points on table2
    ///////// the result I need ////////////
    usr_id | tot_h
    53     | 353   <==== plus 300 points on table2
    22     | 156   <==== plus 100 points on table2
    I think the structure needs to be something like these pseudo statements ;) : from table1, count all id to get the number of records where the usr_id matches, then sum hits + num_g, and from table2 select the extra points where the usr_id is the same as in table1, and get the result:
    usr_id | tot_h
    53     | 353
    22     | 156

    Read the article

  • Emails not being delivered

    - by Tomtiger11
    Comment pointed out that this may fix my problem, and it did: Why don't mails show up in the recipient's mailspool? I use Postfix with Dovecot, and when I send an email from my gmail to my server, it is received at the server, but not at my email client using POP3. I can verify it being received at the server using the mail command. This is my main.cf: queue_directory = /var/spool/postfix command_directory = /usr/sbin daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix mail_owner = postfix myhostname = tom4u.eu myorigin = $myhostname inet_interfaces = all inet_protocols = all unknown_local_recipient_reject_code = 550 relay_domains = $mydomain alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 sendmail_path = /usr/sbin/sendmail.postfix newaliases_path = /usr/bin/newaliases.postfix mailq_path = /usr/bin/mailq.postfix setgid_group = postdrop html_directory = no manpage_directory = /usr/share/man sample_directory = /usr/share/doc/postfix-2.6.6/samples readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES smtpd_tls_cert_file = /etc/postfix/certs/cert.pem milter_protocol = 2 milter_default_action = accept smtpd_milters = inet:localhost:8891 non_smtpd_milters = inet:localhost:8891 smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = $myhostname smtpd_recipient_restrictions = reject_non_fqdn_recipient,permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination,permit broken_sasl_auth_clients = yes smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth If you could help me with this, I'd be most grateful, if you need any more information, please ask. var/log/maillog: May 30 22:44:25 tom4u postfix/smtpd[18626]: connect from mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/smtpd[18626]: 318F679B7F: client=mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/cleanup[18631]: 318F679B7F: message-id=<CAA_0zdxY-WUFGOC57K_yVn0G+5hN=8KSXuohJqMDB5Rm7bqu8w@mail.gmail.com> May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: mail-we0-f181.google.com [74.125.82.181] not internal May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: not authenticated May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: DKIM verification successful May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: s=20120113 d=gmail.com SSL May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: from=<[email protected]>, size=1720, nrcpt=1 (queue active) May 30 22:44:25 tom4u postfix/smtpd[18626]: disconnect from mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/local[18632]: 318F679B7F: to=<[email protected]>, relay=local, delay=0.17, delays=0.12/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox) May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: removed May 30 22:45:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18679 May 30 22:45:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0 May 30 22:46:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18725 May 30 22:46:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0

    Read the article

  • SQL Saturday #44 Huntington Beach Recap

    What a great day. It was long and tiring, but rewarding in so many ways. On Sunday morning, I was driving home and I decided to take the Pacific Coast Highway from Huntington Beach. It was a great chance to exhale and just enjoy the sun and smells of the beach (I really love SoCal sometimes). And for future reference for all you speakers, the beach and ocean are only 5 minutes from the SQL Saturday location. I just couldn't help noticing the shocking number of high-priced cars on the road (4 Bentleys, 3 Ferraris, 1 Aston Martin, 3 Maseratis, 1 Rolls Royce, and 2 Lamborghinis). It made me think about this: Price of all those cars: $150,000+. Impacting the ability of people to learn: Priceless. We have positively impacted the education, knowledge, and capabilities of not only our attendees, but also all of their companies and people they might help as well. That is just staggering and something to be immensely proud of. To all of my fellow community leaders, I salute you. So let's talk about the event. Overall: We had over 220 people register for the event and 180+ people attend. I was shooting for the magical 200 number, but I guess it just gives us more motivation to make it even bigger and better next time. We had a few snags along the way (but what event doesn't?), and I think everything turned out great. I did not hear any negative comments and heard lots of positive comments, along with people asking when the next one is going to be (more on that later). Location - Golden West College: We could not have asked for a better partner for the event. Herb Cohen from Golden West College was the wizard behind the curtains. From the beginning, he was our advocate to the GWC Board and was instrumental in getting our event approved. The day of, Herb was a HUGE help getting any and all logistics that we needed taken care of. In the craziness of the early morning registration crush it was a big help knowing that he and Bret Stateham (Blog | Twitter) were taking care of testing projectors in all the rooms. Anything we needed he was there and was even proactive in getting some things that I had not even thought of (i.e. a dumpster for all of our garbage). I cannot thank Herb enough, along with other members of the GWC staff including Minnie Higgins of the Career and Technical Education Division office, Jack Taylor, public safety, and Ron Pryor, Tech Services Support. And last, but not least, the wireless on campus was absolutely FANTASTIC! Some lessons learned: Unless you are a glutton for punishment, as I no doubt am, you most certainly want to give yourself more than six weeks to plan the event. I am lucky that I have a very understanding wife and had a wonderful set of co-coordinators helping me out. A big thanks goes out to Phil, Marlon (Blog | Twitter), Nitin (Twitter), Thomas (Blog | Twitter), Bret (Blog | Twitter), Ben, and Laurie. Thankfully, the sponsor and speaker community was hugely supportive and we were able to fill out the entire event with speakers and sponsors. I have to say that there is not a lot that I would change after this year's event. There are obviously going to be some things that we can do better or differently next time, but overall I think it was a great event and I was more than happy with the response we received from the community. Sponsors: We obviously could not have put together our event without our sponsors, so we certainly have to show them some love.
    Platinum Sponsors: Quest Software (http://www.quest.com) and My Space (http://www.myspace.com/). Gold: Strategy Companion (http://www.strategycompanion.com). Silver: Fusion-IO (http://www.fusionio.com). Bronze: WestClinTech (http://westclintech.com), Professional Association For SQL Server (http://www.sqlpass.org), Attunity (http://www.attunity.com), and Sharepoint 360 (http://www.sharepoint360.com). Some additional thanks: Andy Warren (Blog | Twitter) - always there to answer my questions and help out when I had some issues or questions with the website. The amount of work that he and everyone else put into SQL Saturday is very amazing. What a great gift to the community! Einstein Bros. Bagels - they were our Breakfast Vendor and arrived perfectly on time with yummy bagels, sweets and, most importantly, coffee. Luccis Deli (http://www.luccisdeli.com) - Luccis was our Lunch Vendor. They were great to work with and the food was excellent. They worked with us to give us a great price. Heard lots of great comments about the lunches. Definitely not your ordinary box lunch. Moving forward: Unfortunately, the work does not end after the event. We have a few things to clear up such as surveys, sponsor stuff, presentations uploaded to the website, expense reimbursement, stuff like that. Hopefully, all that should be cleared up within the next couple of weeks. After that, as a group we are going to get together and decide what our next steps are. We definitely want to keep some of the momentum that we are building as a SQL community and channel that into future SQL Saturdays and other types of community events. In the meantime, for additional training be sure to check out your local user group and PASS: San Diego SQL Server Users Group ( http://www.sdsqlug.org/home/index.cfm ), Orange County SQL Server Users Group ( http://www.sqloc.com/ ), L.A. SQL Server Users Group ( http://www.sql.la/ ), SQL PASS ( http://www.sqlpass.org/ ), and 24 Hours of PASS ( http://www.sqlpass.org/24hours/2010/ ). So stay tuned, there will be more events to come in SoCal!

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple to use for end users. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report. Typically, they do it graphically. However, the development of an OLAP system is many times quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, a structure is not selected in advance; it searches for the structure. As a result, data mining can give you the most valuable results because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand business problem thoroughly. The development might be sometimes even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events. Books Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there. Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book. Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools. Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both, business and technical perspective. Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation. Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining. 
    Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – in-depth explanation of the most popular data mining algorithms. Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more from a business perspective. Paolo Guidici: Applied Data Mining – a very mathematical book, only if you enjoy statistics and mathematics in general. Forthcoming presentations: I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC: Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects. Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on the Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel. Sinergija 2013, Belgrade, Serbia – Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel. SQL Rally Amsterdam, Netherlands – Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don't know, I guess I forgot the name I proposed every time right after I sent the proposal. Courses: Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar format, you can contact your closest SolidQ subsidiary, or, of course, me directly on addresses [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. OK, now you know: no more excuses, start learning data mining, and get the most out of your data.

    Read the article

  • How do you use blog content?

    - by fatherjack
    Do you write a blog, or have you ever thought about it? I think people fall into one of a few categories when it comes to blogs, especially blogs with technical content. Writing articles furiously - daily, twice daily - and reading dozens of others. Writing the odd piece of content and reading plenty of others' output. Started a blog once and it's fizzled out, but reading lots. Thought about starting a blog someday but never got around to it, hopping into the occasional blog when a link or a Tweet takes them there. Never thought about writing one but often catching content from them when Google (or other preferred search engine) finds content related to their search. Now I am not saying that either of these is right or wrong, nor am I saying that anyone should feel any compulsion to be in any particular category. What I would say is that you as a blog reader have the power to move blog writers from one category to another. How, you might ask? How do I have any power over a blog writer? It is very simple - feedback. If you give feedback then the blog writer knows that they are reaching an audience; if there is no response then they are simply writing down their thoughts for what could amount to nothing more than a feeble amount of exercise and a few more keystrokes towards the onset of RSI. Most blogs have a mechanism to alert the writer when there are comments, and personally speaking, if an email is received saying there has been a response to a blog article then there is a rush of enthusiasm, a moment of excitement that someone is actually reading and considering the text that was submitted and made available for the whole world to read. I am relatively new to this blog game and could be in some extended honeymoon period as I have also recently been incorporated into the Simple Talk 'stable'. I can understand that once you get to the "Dizzy Heights of Ozar" (www.brentozar.com) then getting comments and feedback might not be such a pleasure and may even be rather more of a chore but that, I guess, is the price of fame. For us mere mortals starting out blogging, getting feedback (or even at the moment for me, simply the hope of getting feedback) is what keeps it going. The hope that you will pick a topic that hasn't been done recently by Brad McGehee, Grant Fritchey, Paul Randall, Thomas LaRock or any one of the dozen rock star bloggers listed here or others from SQLServerPedia and so on, and then do it well enough to be found, reviewed, or <shudder> (re)tweeted to bring more visitors is what we are striving for, along with the fact that the content we might produce is something that will be of benefit to others. There is only so much point to typing content that no-one is reading and putting it on a blog. You may as well just write it in a diary. A technical blog is not like, say, a blog covering photography techniques where the way to frame and take a picture stands true whether it was written last week, last year or last century - technical content goes sour, quite quickly. There isn't much call for articles about yesterday's technology unless it's something that still applies to current versions too, so some content written no more than 2 years ago isn't worth having now. The combination of a piece of content that you know is not going to last long and the fact that no-one reads it is a strong force against writing anything else. Getting feedback counters that despair and gives a value to writing something new.
I would say that any feedback is good but there are obviously comments that are just so negative or otherwise badly phrased that they would hasten the demise of a blog but, in general most feedback will encourage a writer. It may not be a comment that supports or agrees with the main theme of a post but if it generates discussion or opens up a previously unexplored viewpoint it is contributing to the blog and is therefore encouraging to the writer. Even if you only say "thank you" before you leave a blog, having taken a section of script to use for yourself or having been given a few links to some content that has widened your knowledge it will be so welcome to the blog owner. Isn't it also the decent thing to do, acknowledging that you have benefited from another's efforts?

    Read the article

  • Profit's COLLABORATE 10 Session Selections

    - by Aaron Lazenby
    COLLABORATE 2010 is a mere 11 days away (thanks for the reminder @ocp_advisor). Every year I publish a list of the sessions I think reflect some of the more interesting people/trends in enterprise IT. I should be at all of these sessions, so drop by for a chat--I'll be the guy tapping out emails on my iPad... Monday, April 19, 9:15 a.m. - Keynote: Transforming Customer Value, Delivering Highest Customer Service. Location: Keynote Hall. I never miss Charles Phillips when he speaks--it's one of the best opportunities to get an update on Oracle product developments and strategy. And there's certainly occasion for an update: this will be Phillips' first big presentation since the Oracle + Sun Strategy Update in late January. Phillips is appearing with Oracle Executive Vice President of Development Thomas Kurian, which means there should be some excellent information about how customers are using Oracle's complete software and hardware stack to address enterprise IT challenges. The session should provide some excellent context for the rest of the week's sessions... don't miss it. 10:45 a.m. - Oracle Fusion Applications: Functional Overview. Location: South Seas F. I met Basheer Khan at COLLABORATE 08 in Denver and have followed his work ever since. He's a former member of the OAUG Board of Directors, an Oracle ACE, and a charismatic enterprise IT expert. Having worked with the Oracle Usability Advisory Board, Basheer should have some fascinating insights to share about the features and interface of Oracle's Fusion Applications. This session, along with Nadia Bendjedou's "10 Things You Can Do Today to Prepare for the Next Generation Applications" (on Tuesday, April 20, 8:00 a.m. in room 3662), should give attendees the update they need about Oracle's next-generation applications. 1:15 p.m. - E-Business Suite in the Amazon Cloud. Location: South Seas H. I did my first full-fledged cloud computing coverage at last year's COLLABORATE show (check out my interview with Oracle's Bill Hodak), where I first learned about Amazon's EC2 offering. I've since talked with several people who have provisioned server space on Amazon's cloud with great results. So I'm looking forward to watching the audience configure an instance of the Oracle E-Business Suite release 12 on the cloud while Chuck Edwards from Blue Gecko drives. This session should take some of the mist and vapor out of the cloud conversation. 2:30 p.m. - "Zero Sign-on" to EBS - Enabling 96000 Users to Login to EBS Without User Maintenance. Location: South Seas H. I'll be sitting tight in South Seas H for the next session on Monday, where Doug Pepka, a ten-year veteran of communications giant Comcast, will be walking attendees through a massive single sign-on (SSO) project across the enterprise. I'm working on a story about SSO for the August issue of Profit, so this session has real practical value to me. Plus the proliferation of user account logins--both personal and professional--makes this a critical usability/change management issue for IT leaders planning for successful long-term IT implementations. Tuesday, 8:00 a.m. - Information Architecture for Men in Kilts. Location: SURF A. Getting to an 8:00 a.m. presentation is a tall order in Las Vegas, but presenter Billy Cripe will make it worth your effort. Not only is the title of this session great, but the content should appeal to any IT strategist looking to push the limits of Web 2.0 technologies in the enterprise.
    Cripe is a product management director of Enterprise 2.0 and Enterprise Content Management at Oracle, author of Reshaping Your Business with Web 2.0, and a prolific blogger--he knows how information architecture is critical to an Enterprise 2.0 implementation. 10:30 a.m. - Oracle Virtualization: From Desktop to Data Center. Location: REEF F. Data center virtualization is still one of the best ways to reduce the cost of running enterprise IT. With the addition of Sun products, Oracle has the industry's most comprehensive virtualization portfolio. I must admit, I'm no expert in this subject. So I'm looking forward to Monica Kumar's presentation so I can get up to speed. Wednesday, 8:00 a.m. - The Art of the Steal. Location: Mandalay Bay Ballroom J. Many will know Frank Abagnale from Steven Spielberg's 2002 film "Catch Me if You Can." The one-time con man and international fugitive who swindled $2.5 million in forged checks went on to help U.S. federal officials investigate fraud cases. Now the CEO of Abagnale and Associates, he has become an invaluable source to the business world on the subject of fraud and fraud protection. With identity theft and digital fraud still on the rise, this session should be an entertaining, and sobering, education on the threats facing businesses and customers around the world. A great way to start Wednesday. 1:00 p.m. - Google Wave: Will it replace e-mail as we know it today? Location: SURF E. By many assessments (my own included), Google Wave is a bit of an open collaboration failure. It may seem like an odd reason for me to be excited about this session, but I'm looking forward to the chance to revisit the technology. Also, this is a great case study in connecting free, available Internet tools to existing enterprise computing environments--an issue that IT strategists must contend with as workers spread out and choose their own productivity tools.

    Read the article

  • Closing the gap between strategy and execution with Oracle Business Intelligence 11g

    - by manan.goel(at)oracle.com
    Wikipedia defines strategy as a plan of action designed to achieve a particular goal. An example of this is General Electric's acquisitions and divestiture strategy (plan) designed to propel GE to the number 1 or 2 place (goal) in every business segment that it operated in. Execution, on the other hand, can be defined as the actions taken to get things done. In GE's case, execution would be the steps followed for mergers/acquisitions or divestiture. The business press has written extensively about the importance of both strategy and execution in achieving desired business objectives. Perhaps the quote from Thomas Edison says it best - "vision without execution is hallucination". Conversely, it can be said that "execution without vision" may well be "wishful thinking". Research overwhelmingly points towards the wide gap between strategy and execution. According to a published study, 49% of surveyed executives perceive a gap between their organizations' ability to develop and communicate sound strategies and their ability to implement those strategies. Further, of these respondents, 64% don't have full confidence that their companies will be able to close the gap. Having established the severity and importance of the problem, let's talk about the reasons for the strategy-execution gap. The common reasons include: lack of clearly defined goals, lack of a consistent measure of success, lack of ownership, lack of alignment, lack of communication, lack of proper execution, and lack of monitoring. There are multiple approaches to solving the problem, including organizational development practices, technology enablement, etc. In most cases a combination of approaches is required to achieve the desired result. For the purposes of this discussion, I'll focus on technology. Imagine an integrated closed-loop technology platform that automates the entire management cycle from defining strategy to assigning ownership to communicating goals to achieving alignment to collaboration to taking actions to monitoring progress and making mid-course corrections. Besides, for best ROI and lowest TCO such a system should also have characteristics like: Complete - full functionality, rich end user access. Open - any data source, any business application, any technology stack. Integrated - common metadata, common security, common system management. From a capabilities perspective the system should provide the following: Define - strategy, objectives, ownership, KPIs. Communicate - pervasive, collaborative, role based, secure. Execute - integrated, intuitive, secure, ubiquitous. Monitor - multiple styles and formats, exception based, push & pull. Having talked about the business problem and outlined the blueprint for a technology solution, let's talk about how Oracle Business Intelligence 11g can help. Oracle Business Intelligence is a comprehensive business intelligence solution for reporting, ad hoc query and analysis, OLAP, dashboards and scorecards. Oracle's best-in-class BI platform is based on an architecturally integrated technology foundation that provides a unified end user experience and features a Common Enterprise Information Model, with common security, query request generation and optimization, and system management.
    The BI platform is: Complete - meaning it delivers all modes and styles of BI, including reporting, ad hoc query and analysis, OLAP, dashboards and scorecards, with a rich end user experience that includes visualization, collaboration, alerts and notifications, search and mobile access. Open - meaning the BI platform integrates with any data source, ETL tool, business application, application server, security infrastructure or portal technology, as well as any ODBC-compliant third-party analytical tool. The suite accesses data from multiple heterogeneous sources--including popular relational and multidimensional data sources and major ERP and CRM applications from Oracle and SAP. Integrated - meaning the BI platform is based on an architecturally integrated technology foundation built on an open, standards-based service oriented architecture. The platform features a common enterprise information model, a common security model and a common configuration, deployment and systems management framework. To summarize, Oracle Business Intelligence is a comprehensive, integrated BI platform that lets you define strategy, identify objectives, assign ownership, define KPIs, collaborate, take action, monitor, report and do course corrections, all from a single interface and a single system. The platform's integrated metadata model and task-based design ensure that the entire workflow from defining strategy to execution to monitoring is completely integrated, delivering end-to-end visibility, transparency and agility. Click here to learn more about Oracle BI 11g.

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service oriented architecture (SOA)? IT Governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied to systems, the following benefits are achieved, according to Anne Thomas Manes (2010). Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system. Governance defines development standards based on proven design principles and patterns that promote reuse and composition. Governance provides developers with a set of proven design principles, standards and practices that promote the reduction of system-based component dependencies. By following these guidelines, individual components will be easier to maintain. For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know for me personally, I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working on this particular scenario and I have found that the concept of code reuse and composition is almost nonexistent. Based on this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole. I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach in which the top layer will be defined by enterprise architects that focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer will be defined by solution architects or department managers that further expand on the abstracted guidelines defined by the enterprise architects. This layer will contain further definitions as to when various design patterns, coding standards, and best practices are to be applied based on the context of the solutions that are being developed by the department. The final layer will be defined by the system designer or a solutions architect assigned to a project, in that they will define what design patterns will be used in a solution, naming conventions, as well as outline how a system will function based on the best practices defined by the previous layers.
    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people, in that the US government defines rules and regulations in the abstract and then the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules based on how they are interpreted by the local community. To further define my example, the United States government defines that marijuana is illegal. Each individual state has the option to determine this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules, in that a police officer will arrest anyone in the state of Florida for having this drug on them if they walk down the street, but in California if a person has a medical prescription for the drug they will not get arrested. REFERENCES: Thomas Manes, Anne (2010). Understanding SOA Governance: http://www.soamag.com/I40/0610-2.php

    Read the article

  • Silverlight with MVVM Inheritance: ModelView and View matching the Model

    - by moonground.de
    Hello Stackoverflowers! :) Today I have a special question on Silverlight (4 RC) MVVM and inheritance concepts, and I'm looking for a best-practice solution... I think that I understand the basic idea and concepts behind MVVM. My Model doesn't know anything about the ViewModel, just as the ViewModel itself doesn't know about the View. The ViewModel knows the Model and the Views know the ViewModels. Imagine the following basic (example) scenario (I'm trying to keep everything short and simple): My Model contains a ProductBase class with a few basic properties, a SimpleProduct : ProductBase adding a few more properties, and an ExtendedProduct : ProductBase adding other properties. According to this Model I have several ViewModels, most essentially SimpleProductViewModel : ViewModelBase and ExtendedProductViewModel : ViewModelBase. Last but not least, the corresponding Views, SimpleProductView and ExtendedProductView. In the future, I might add many product types (and matching Views + VMs). 1. How do I know which ViewModel to create when receiving a Model collection? After calling my data provider method, I will finally end up having a List<ProductBase>. It contains, for example, one SimpleProduct and two ExtendedProducts. How can I transform the results to an ObservableCollection<ViewModelBase> having the proper ViewModel types (one SimpleProductViewModel and two ExtendedProductViewModels) in it? I might check for the Model type and construct the ViewModel accordingly, i.e. foreach(ProductBase currentProductBase in resultList) if (currentProductBase is SimpleProduct) viewModels.Add( new SimpleProductViewModel((SimpleProduct)currentProductBase)); else if (currentProductBase is ExtendedProduct) viewModels.Add( new ExtendedProductViewModels((ExtendedProduct)currentProductBase)); ... } ...but I consider this very bad practice, as this code doesn't follow object-oriented design. The other way round, providing abstract Factory methods would reduce the code to: foreach(ProductBase currentProductBase in resultList) viewModels.Add(currentProductBase.CreateViewModel()) and would be perfectly extensible, but since the Model doesn't know the ViewModels, that's not possible. I might bring interfaces into the game here, but I haven't seen such an approach proven yet. 2. How do I know which View to display when selecting a ViewModel? This is pretty much the same problem, but on a higher level. Finally ending up with the desired ObservableCollection<ViewModelBase> would require the main view to choose a matching View for the ViewModel. In WPF, there is a DataTemplate concept which can supply a View for a defined DataType. Unfortunately, this doesn't work in Silverlight, and the only replacement I've found was the ResourceSelector of the SLExtensions toolkit, which is buggy and not satisfying. Besides that, all problems from Question 1 apply as well. Do you have some hints or even a solution for the problems I describe, which you hopefully can understand from my explanation? Thank you in advance! Thomas
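    One common way to get from a heterogeneous List<ProductBase> to the matching ViewModels without a chain of is-checks is a small registry that maps the runtime Model type to a ViewModel factory. The registrations live on the ViewModel side, so the Model still knows nothing about the ViewModels, and adding a new product type means adding one entry. This is only a sketch of the pattern, written in C++ for illustration (in Silverlight the same idea would be a Dictionary<Type, Func<ProductBase, ViewModelBase>>); the type names are borrowed from the question, everything else is hypothetical:

        #include <functional>
        #include <iostream>
        #include <memory>
        #include <string>
        #include <typeindex>
        #include <unordered_map>
        #include <vector>

        // Hypothetical stand-ins for the Model and ViewModel hierarchies.
        struct ProductBase { virtual ~ProductBase() = default; };
        struct SimpleProduct : ProductBase {};
        struct ExtendedProduct : ProductBase {};

        struct ViewModelBase {
            virtual ~ViewModelBase() = default;
            virtual std::string Name() const = 0;
        };
        struct SimpleProductViewModel : ViewModelBase {
            explicit SimpleProductViewModel(const SimpleProduct&) {}
            std::string Name() const override { return "SimpleProductViewModel"; }
        };
        struct ExtendedProductViewModel : ViewModelBase {
            explicit ExtendedProductViewModel(const ExtendedProduct&) {}
            std::string Name() const override { return "ExtendedProductViewModel"; }
        };

        // Registry: runtime Model type -> factory producing the matching ViewModel.
        using Factory = std::function<std::unique_ptr<ViewModelBase>(const ProductBase&)>;

        std::unordered_map<std::type_index, Factory> MakeRegistry() {
            std::unordered_map<std::type_index, Factory> reg;
            reg[typeid(SimpleProduct)] = [](const ProductBase& m) -> std::unique_ptr<ViewModelBase> {
                return std::make_unique<SimpleProductViewModel>(static_cast<const SimpleProduct&>(m));
            };
            reg[typeid(ExtendedProduct)] = [](const ProductBase& m) -> std::unique_ptr<ViewModelBase> {
                return std::make_unique<ExtendedProductViewModel>(static_cast<const ExtendedProduct&>(m));
            };
            return reg;
        }

        int main() {
            auto registry = MakeRegistry();
            std::vector<std::unique_ptr<ProductBase>> models;
            models.push_back(std::make_unique<SimpleProduct>());
            models.push_back(std::make_unique<ExtendedProduct>());
            models.push_back(std::make_unique<ExtendedProduct>());

            for (const auto& m : models) {
                auto vm = registry.at(typeid(*m))(*m);   // dispatch on the runtime type
                std::cout << vm->Name() << "\n";
            }
            return 0;
        }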

    Read the article

  • Loop through children and display each, as3

    - by VideoDnd
    How do I loop through all of my children, and display each? I would like to know the best way to do this. my children and containerfive children, one plays every sec, 1,2,3, etc. var square1:Square1 = new Square1; var square2:Square2 = new Square2; var square3:Square3 = new Square3; var square4:Square4 = new Square4; var square5:Square5 = new Square5; var container:Sprite = new Sprite; addChild(container); container.addChild(square1) container.addChild(square2) container.addChild(square3) container.addChild(square4) container.addChild(square5) my timer var timly:Timer = new Timer(1000, 5); timly.start(); timly.addEventListener(TimerEvent.TIMER, onLoop); Note: Tried for loop, numChildren -1, and visibility ERROR 'access of undefined property' //Thomas's idea var timly:Timer = new Timer(1000, 10); timly.start(); timly.addEventListener(TimerEvent.TIMER, onLoop, false, 0, true); // var square1:Square1 = new Square1; square1.visible = false container.addChild(square2) var square2:Square2 = new Square2; square2.visible = false container.addChild(square3) var square3:Square3 = new Square3; square3.visible = false container.addChild(square3) var square4:Square4 = new Square4; square4.visible = false container.addChild(square4) var square5:Square5 = new Square5; square5.visible = false container.addChild(square5) var container:Sprite = new Sprite; this.addChild(container); var curCount:Number = 100; // function collectChildren(container:DisplayObjectContainer):Array { var len:int = container.numChildren; var mySquaresArray:Array = []; for (var i:int = 0; i < len; i++) { mySquaresArray.push(container.getChildAt(i).name); } return mySquaresArray; } // function onLoop( e:Event ) { curCount = e.target.currentCount; if( curCount > 1 ) { var previous_square = curCount -2; mySquaresArray[previous_square].visible = false; } var current_square = curCount - 1; mySquaresArray[current_square].visible = true; }

    Read the article

  • Mootools 1.2.4 delegation not working in IE8...?

    - by michael
    Hey there everybody-- So I have a listbox next to a form. When the user clicks an option in the select box, I make a request for the related data, returned in a JSON object, which gets put into the form elements. When the form is saved, the request goes through and the listbox is rebuilt with the updated data. Since it's being rebuilt, I'm trying to use delegation on the listbox's parent div for the onchange code. The trouble I'm having is with IE8 (big shock) not firing the delegated event. I have the following HTML:
    <div id="listwrapper" class="span-10 append-1 last">
        <select id="list" name="list" size="20">
            <option value="86">Adrian Franklin</option>
            <option value="16">Adrian McCorvey</option>
            <option value="196">Virginia Thomas</option>
        </select>
    </div>
    and the following script to go with it:
    window.addEvent('domready', function() {
        var jsonreq = new Request.JSON();
        $('listwrapper').addEvent('change:relay(select)', function(e) {
            alert('this does not fire in IE8');
            e.stop();
            var status = $('statuswrapper').empty().addClass('ajax-loading');
            jsonreq.options.url = 'de_getformdata.php';
            jsonreq.options.method = 'post';
            jsonreq.options.data = {'getlist':'<?php echo $getlist ?>','pkey':$('list').value};
            jsonreq.onSuccess = function(rObj, rTxt) {
                status.removeClass('ajax-loading');
                for (key in rObj) {
                    status.set('html','You are currently editing '+rObj['cname']);
                    if ($chk($(key))) $(key).value = rObj[key];
                }
                $('lalsoaccomp-yes').set('checked',(($('naccompkey').value > 0)?'true':'false'));
                $('lalsoaccomp-no').set('checked',(($('naccompkey').value > 0)?'false':'true'));
            }
            jsonreq.send();
        });
    });
    (I took out a bit of unrelated stuff.) So this all works as expected in Firefox, but IE8 refuses to fire the delegated change event on the select element. If I attach the change function directly to the select, then it works just fine. Am I missing something? Does IE8 just not like the :relay? Side note: I'm very new to MooTools and JavaScript, so if there's something that can be improved code-wise, please let me know too. Thanks!
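    A hedged note on the likely cause: in IE8 the native change event does not bubble, so a handler delegated via change:relay(select) on the wrapper div never hears it, and MooTools 1.2.4 does not emulate that bubbling for you. One common workaround is to keep the handler in a named function and re-attach it directly to the select every time the listbox is rebuilt. The function names below (onListChange, attachListHandler) are illustrative, not MooTools APIs.

    // Sketch of the re-attach workaround.
    function onListChange(e) {
        // ... the existing Request.JSON handling from the question goes here ...
    }

    function attachListHandler() {
        $('list').addEvent('change', onListChange);   // direct attach works in IE8
    }

    window.addEvent('domready', function() {
        attachListHandler();
    });

    // Wherever the listbox HTML is rebuilt after a save, call attachListHandler()
    // again so the freshly created select element gets its handler back.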

    Read the article

  • ajaxSubmit options success & error functions aren't fired

    - by Thommy Tomka
    jQuery 1.7.2, jQuery Validate 1.1.0, jQuery Form 3.18, WordPress 3.4.2. I am trying to code a contact/mail form in the above environment with the above jQuery libs. Now I am having a problem with the jQuery Form JS: I have taken the original code from the developer's page for ajaxSubmit, only altered the target option to an ID which exists in my HTML source, and replaced $ with jQuery in the function showRequest. The problem is that the function named in success: does not fire. I tried the same with error: and again nothing fired. Only complete: did, and the function I placed there alerted the responseText from the receiving script. Does anyone have an idea what's going wrong? Thanks in advance! Thomas
    jQuery(document).ready(function() {
        var options = {
            target: '#mail-status',        // target element(s) to be updated with server response
            beforeSubmit: showRequest,     // pre-submit callback
            success: showResponse,         // post-submit callback
            // other available options:
            //url:       url    // override for form's 'action' attribute
            //type:      type   // 'get' or 'post', override for form's 'method' attribute
            //dataType:  null   // 'xml', 'script', or 'json' (expected server response type)
            //clearForm: true   // clear all form fields after successful submit
            //resetForm: true   // reset the form after successful submit
            // $.ajax options can be used here too, for example:
            //timeout:   3000
        };
        jQuery("#mailform").validate({
            submitHandler: function(form) {
                jQuery(form).ajaxSubmit(options);
            },
            errorPlacement: function(error, element) {
            },
            rules: {
                author:  { minlength: 2, required: true },
                email:   { required: true, email: true },
                comment: { minlength: 2, required: true }
            },
            highlight: function(element) {
                jQuery(element).addClass("e");
                jQuery(element.form).find("label[for=" + element.id + "]").addClass("e");
            },
            unhighlight: function(element) {
                jQuery(element).removeClass("e");
                jQuery(element.form).find("label[for=" + element.id + "]").removeClass("e");
            }
        });
    });
    // pre-submit callback
    function showRequest(formData, jqForm, options) {
        // formData is an array; here we use $.param to convert it to a string to display it
        // but the form plugin does this for you automatically when it submits the data
        var queryString = jQuery.param(formData);
        // jqForm is a jQuery object encapsulating the form element. To access the
        // DOM element for the form do this:
        // var formElement = jqForm[0];
        alert('About to submit: \n\n' + queryString);
        // here we could return false to prevent the form from being submitted;
        // returning anything other than false will allow the form submit to continue
        return true;
    }
    // post-submit callback
    function showResponse(responseText, statusText, xhr, $form) {
        // for normal html responses, the first argument to the success callback
        // is the XMLHttpRequest object's responseText property
        // if the ajaxSubmit method was passed an Options Object with the dataType
        // property set to 'xml' then the first argument to the success callback
        // is the XMLHttpRequest object's responseXML property
        // if the ajaxSubmit method was passed an Options Object with the dataType
        // property set to 'json' then the first argument to the success callback
        // is the json data object returned by the server
        alert('status: ' + statusText + '\n\nresponseText: \n' + responseText +
            '\n\nThe output div should have already been updated with the responseText.');
    }
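    A hedged debugging step rather than a definitive fix: with $.ajax-style callbacks, complete always runs, success only runs for a 2xx response, and error runs for HTTP or transport failures. Logging the status from complete, plus adding an error option (which the plugin passes through to $.ajax, as the comment in the code above notes), usually reveals why the other callbacks stay silent, for example a WordPress handler answering with a 404/500 or the form's action resolving to the wrong URL. The alerts below are for diagnosis only.

    var options = {
        target: '#mail-status',
        beforeSubmit: showRequest,
        success: showResponse,
        error: function(xhr, status, err) {
            // fires for non-2xx responses and transport errors
            alert('error: HTTP ' + xhr.status + ' (' + status + ')');
        },
        complete: function(xhr, status) {
            // always fires; xhr.status shows which branch was taken
            alert('complete: HTTP ' + xhr.status + ' (' + status + ')');
        }
    };

    One caveat worth knowing: if a form contains a file input, the plugin submits through a hidden iframe instead of XHR, and HTTP error statuses are then not reported to these callbacks in the same way.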

    Read the article

  • Layout issue regarding a JQuery smoothDivScroll

    - by MichaelMcCabe
    Hi all. I have been scratching my head for a while on this one. I don't really play around with HTML formatting that often. I have a smoothDivScroll using jQuery (http://maaki.com/thomas/SmoothDivScroll/) to scroll through dynamically generated images. What I want it for is to have multiple small images lined up right next to each other, each image with a section of writing underneath: about 25 images, 140px wide each, in a 7560px span. I achieved this nicely by putting a table inside the div, with each image inside a <td>. The problem I have is that I need the slider to start from the furthest RIGHT image and slide automatically left. This is achieved (if you look at the link above) by startAtElementId. But if I am using a table inside the div, it takes the TABLE as an element and not the TD, so it doesn't work. I know this because if I change every image to be inside its own table (i.e. 25 tables), it works, but obviously then the formatting is not how I want it. Can anyone think of a way I can use this slider (with or without modifications to the JS code) with 25 images that NEED to be right next to each other, with no gap? I guess this is more of a formatting question than a question about the slider, because I only know ONE way of doing what I want, and that's the markup below, which will not allow me to set startAtElementId.
    <div class="scrollableArea">
        <table width="7560px" class="index-body">
            <tr>
                <td width="140px"><center><a onclick="__doPostBack('getEarlierDate');" ><img src="../images/olderGraphs.gif" width="140" height="60"></a>Click for more</center></td>
                <td width="300px"><center><a onclick="__doPostBack('chartClicked25');" ><cewolf:img chartid="verticalbar25" renderer="cewolf" width="300" height="60"/></a><c:out value="${date25}"/></center></td>
                <td width="300px"><center><a onclick="__doPostBack('chartClicked24');" ><cewolf:img chartid="verticalbar24" renderer="cewolf" width="300" height="60"/></a><c:out value="${date24}"/></center></td>
    and so on
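    A hedged sketch of the usual workaround: drop the single table and make each image-plus-caption its own direct child of the scrollable area, for example a floated div of fixed width with an id, so that startAtElementId has a real element to point at and the items still sit flush against each other. The ids and the exact startAtElementId value below are illustrative only.

    <div class="scrollableArea">
        <div id="chart25" style="float: left; width: 300px; text-align: center;">
            <a onclick="__doPostBack('chartClicked25');"><cewolf:img chartid="verticalbar25" renderer="cewolf" width="300" height="60"/></a>
            <br/><c:out value="${date25}"/>
        </div>
        <div id="chart24" style="float: left; width: 300px; text-align: center;">
            <a onclick="__doPostBack('chartClicked24');"><cewolf:img chartid="verticalbar24" renderer="cewolf" width="300" height="60"/></a>
            <br/><c:out value="${date24}"/>
        </div>
        <!-- ... one fixed-width div per chart, oldest on the left ... -->
    </div>

    With fixed widths and floats there is no gap between items, and the scroller can then be initialised with something like startAtElementId: "chart25" to begin at the right-most chart, since each div is now a direct child the plugin can target.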

    Read the article

  • PeopleSoft Upgrades, Fusion, & BI for Leading European PeopleSoft Applications Customers

    - by Mark Rosenberg
    With so many industry-leading services firms around the globe managing their businesses with PeopleSoft, it’s always an adventure setting up times and meetings for us to keep in touch with them, especially those outside of North America who often do not get to join us at Oracle OpenWorld. Fortunately, during the first two weeks of May, Nigel Woodland (Oracle’s Service Industries Director for the EMEA region) and I successfully blocked off our calendars to visit seven different customers spanning four countries in Western Europe. We met executives and leaders at four Staffing industry firms, two Professional Services firms that engage in consulting and auditing, and a Financial Services firm. As we shared the latest information regarding product capabilities and plans, we also gained valuable insight into the hot technology topics facing these businesses. What we heard was both informative and inspiring, and I suspect other Oracle PeopleSoft applications customers can benefit from one or more of the following observations from our trip. Great IT Plans Get Executed When You Respect the Users Each of our visits followed roughly the same pattern. After introductions, Nigel outlined Oracle’s product and technology strategy, including a discussion of how we at Oracle invest in each layer of the “technology stack” to provide customers with unprecedented business management capabilities and choice. Then, I provided the specifics of the PeopleSoft product line’s investment strategy, detailing the dramatic number of rich usability and functionality enhancements added to release 9.1 since its general availability in 2009 and the game-changing capabilities slated for 9.2. What was most exciting about each of these discussions was that shortly after my talking about what customers can do with release 9.1 right now to drive up user productivity and satisfaction, I saw the wheels turning in the minds of our audiences. Business analyst and end user-configurable tools and technologies, such as WorkCenters and the Related Action Framework, that provide the ability to tailor a “central command center” to the exact needs of each recruiter, biller, and every other role in the organization were exactly what each of our customers had been looking for. Every one of our audiences agreed that these tools which demonstrate a respect for the user would finally help IT pole vault over the wall of resistance that users had often raised in the past. With these new user-focused capabilities, IT is positioned to definitively partner with the business, instead of drag the business along, to unlock the value of their investment in PeopleSoft. This topic of respecting the user emerged during our very first visit, which was at Vital Services Group at their Head Office “The Mill” in Manchester, England. (If you are a student of architecture and are ever in Manchester, you should stop in to see this amazingly renovated old mill building.) I had just finished explaining our PeopleSoft 9.2 roadmap, and Mike Code, PeopleSoft Systems Manager for this innovative staffing company, said, “Mark, the new features you’ve shown us in 9.1/9.2 are very relevant to our business. As we forge ahead with the 9.1 upgrade, the ability to configure a targeted user interface with WorkCenters, Related Actions, Pivot Grids, and Alerts will enable us to satisfy the business that this upgrade is for them and will deliver tangible benefits. 
In fact, you’ve highlighted that we need to start talking to the business to keep up the momentum to start reviewing the 9.2 upgrade after we get to 9.1, because as much as 9.1 and PeopleTools 8.52 offers, what you’ve shown us for 9.2 is what we’ve envisioned was ultimately possible with our investment in PeopleSoft applications.” We also received valuable feedback about our investment for the Staffing industry when we visited with Hans Wanders, CIO of Randstad (the second largest Staffing company in the world) in the Netherlands. After our visit, Hans noted, “It was very interesting to see how the PeopleSoft applications have developed. I was truly impressed by many of the new developments.” Hans and Mike, sincere thanks for the validation that our team’s hard work and dedication to “respecting the users” is worth the effort! Co-existence of PeopleSoft and Fusion Applications Just Makes Sense As a “product person,” one of the most rewarding things about visiting customers is that they actually want to talk to me. Sometimes, they want to discuss a product area that we need to enhance; other times, they are interested in learning how to extract more value from their applications; and still others, they want to tell me how they are using the applications to drive real value for the business. During this trip, I was very pleased to hear that several of our customers not only thought the co-existence of Fusion applications alongside PeopleSoft applications made sense in theory, but also that they were aggressively looking at how to deploy one or more Fusion applications alongside their PeopleSoft HCM and FSCM applications. The most common deployment plan in the works by three of the organizations is to upgrade to PeopleSoft 9.1 or 9.2, and then adopt one of the new Fusion HCM applications, such as Fusion Performance Management or the full suite of  Fusion Talent Management. For example, during an applications upgrade planning discussion with the staffing company Hays plc., Mark Thomas, who is Hays’ UK IT Director, commented, “We are very excited about where we can go with the latest versions of the PeopleSoft applications in conjunction with Fusion Talent Management.” Needless to say, this news was very encouraging, because it reiterated that our applications investment strategy makes good business sense for our customers. Next Generation Business Intelligence Is the Key to the Future The third, and perhaps most exciting, lesson I learned during this journey is that our audiences already know that the latest generation of Business Intelligence technologies will be the “secret sauce” for organizations to transform business in radical ways. While a number of the organizations we visited on the trip have deployed or are deploying Oracle Business Intelligence Enterprise Edition and the associated analytics applications to provide dashboards of easy-to-understand, user-configurable metrics that help optimize business performance according to current operating procedures, what’s most exciting to them is being able to use Business Intelligence to change the way an organization does business, grows revenue, and makes a profit. In particular, several executives we met asked whether we can help them minimize the need to have perfectly structured data and at the same time generate analytics that improve order fulfillment decision-making. 
To them, the path to future growth lies in having the ability to analyze unstructured data rapidly and intuitively and leveraging technology’s ability to detect patterns that a human cannot reasonably be expected to see. For illustrative purposes, here is a good example of a business problem where analyzing a combination of structured and unstructured data can produce better results. If you have a resource manager trying to decide which person would be the best fit for an assignment in terms of ensuring (a) client satisfaction, (b) the individual’s satisfaction with the work, (c) least travel distance, and (d) highest margin, you traditionally compare resource qualifications to assignment needs, calculate margins on past work with the client, and measure distances. To perform these comparisons, you are likely to need the organization to have profiles set up, people ranked against profiles, margin targets set up, margins measured, distances set up, distances measured, and more. As you can imagine, this requires organizations to plan and implement data setup, capture, and quality management initiatives to ensure that dependable information is available to support resourcing analysis and decisions. In the fast-paced, tight-budget world in which most organizations operate today, the effort and discipline required to maintain high-quality, structured data like those described in the above example are certainly not desirable and in some cases are not feasible. You can imagine how intrigued our audiences were when I informed them that we are ready to help them analyze volumes of unstructured data, detect trends, and produce recommendations. Our discussions delved into examples of how the firms could leverage Oracle’s Secure Enterprise Search and Endeca technologies to keyword search against, compare, and learn from unstructured resource and assignment data. We also considered examples of how they could employ Oracle Real-Time Decisions to generate statistically significant recommendations based on similar resourcing scenarios that have produced the desired satisfaction and profit margin results.
---
Although I had almost no time for sight-seeing during this trip to Europe, I have to say that it may have been one of the most energizing and engaging trips of my career. Showing these dedicated customers how they can give every user a uniquely tailored set of tools and address business problems in ways that have to date been impossible made the journey across the Atlantic more than worth it. If any of these three topics intrigue you, I’d recommend you contact your Oracle applications representative to arrange for more detailed discussions with the appropriate members of our organization.

    Read the article
