Search Results

Search found 3846 results on 154 pages for 'life sciences'.

  • Python - Bitmap won't draw/display

    - by Wallter
    I have been working on this project for some time now - it was originally supposed to be a test to see if, using wxPython, I could build a button 'from scratch.' From scratch means: that i would have full control over all the aspects of the button (i.e. controlling the BMP's that are displayed... what the event handlers did... etc.) I have run into several problems (as this is my first major python project.) Now, when the all the code is working for the life of me I can't get an image to display. Basic code - not working dc = wx.BufferedPaintDC(self) dc.SetFont(self.GetFont()) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) dc.Clear() dc.DrawBitmap(wx.Bitmap("/home/wallter/Desktop/Mouseover.bmp"), 100, 100) self.Refresh() self.Update() Full Main.py import wx from Custom_Button import Custom_Button from wxPython.wx import * ID_ABOUT = 101 ID_EXIT = 102 class MyFrame(wx.Frame): def __init__(self, parent, ID, title): wxFrame.__init__(self, parent, ID, title, wxDefaultPosition, wxSize(400, 400)) self.CreateStatusBar() self.SetStatusText("Program testing custom button overlays") menu = wxMenu() menu.Append(ID_ABOUT, "&About", "More information about this program") menu.AppendSeparator() menu.Append(ID_EXIT, "E&xit", "Terminate the program") menuBar = wxMenuBar() menuBar.Append(menu, "&File"); self.SetMenuBar(menuBar) # The call for the 'Experiential button' self.Button1 = Custom_Button(parent, -1, wx.Point(100, 100), wx.Bitmap("/home/wallter/Desktop/Mouseover.bmp"), wx.Bitmap("/home/wallter/Desktop/Normal.bmp"), wx.Bitmap("/home/wallter/Desktop/Click.bmp")) # The following three lines of code are in place to try to get the # Button1 to display (trying to trigger the Paint event (the _onPaint.) # Because that is where the 'draw' functions are. self.Button1.Show(true) self.Refresh() self.Update() # Because the Above three lines of code did not work, I added the # following four lines to trigger the 'draw' functions to test if the # '_onPaint' method actually worked. # These lines do not work. 
dc = wx.BufferedPaintDC(self) dc.SetFont(self.GetFont()) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) dc.DrawBitmap(wx.Bitmap("/home/wallter/Desktop/Mouseover.bmp"), 100, 100) EVT_MENU(self, ID_ABOUT, self.OnAbout) EVT_MENU(self, ID_EXIT, self.TimeToQuit) def OnAbout(self, event): dlg = wxMessageDialog(self, "Testing the functions of custom " "buttons using pyDev and wxPython", "About", wxOK | wxICON_INFORMATION) dlg.ShowModal() dlg.Destroy() def TimeToQuit(self, event): self.Close(true) class MyApp(wx.App): def OnInit(self): frame = MyFrame(NULL, -1, "wxPython | Buttons") frame.Show(true) self.SetTopWindow(frame) return true app = MyApp(0) app.MainLoop() Full CustomButton.py import wx from wxPython.wx import * class Custom_Button(wx.PyControl): def __init__(self, parent, id, Pos, Over_BMP, Norm_BMP, Push_BMP, **kwargs): wx.PyControl.__init__(self,parent, id, **kwargs) self.Bind(wx.EVT_LEFT_DOWN, self._onMouseDown) self.Bind(wx.EVT_LEFT_UP, self._onMouseUp) self.Bind(wx.EVT_LEAVE_WINDOW, self._onMouseLeave) self.Bind(wx.EVT_ENTER_WINDOW, self._onMouseEnter) self.Bind(wx.EVT_ERASE_BACKGROUND,self._onEraseBackground) self.Bind(wx.EVT_PAINT,self._onPaint) self.pos = Pos self.Over_bmp = Over_BMP self.Norm_bmp = Norm_BMP self.Push_bmp = Push_BMP self._mouseIn = False self._mouseDown = False def _onMouseEnter(self, event): self._mouseIn = True def _onMouseLeave(self, event): self._mouseIn = False def _onMouseDown(self, event): self._mouseDown = True def _onMouseUp(self, event): self._mouseDown = False self.sendButtonEvent() def sendButtonEvent(self): event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED, self.GetId()) event.SetInt(0) event.SetEventObject(self) self.GetEventHandler().ProcessEvent(event) def _onEraseBackground(self,event): # reduce flicker pass def Iz(self): dc = wx.BufferedPaintDC(self) dc.DrawBitmap(self.Norm_bmp, 100, 100) def _onPaint(self, event): # The printing functions, they should work... but don't. dc = wx.BufferedPaintDC(self) dc.SetFont(self.GetFont()) dc.SetBackground(wx.Brush(self.GetBackgroundColour())) dc.Clear() dc.DrawBitmap(self.Norm_bmp) # This never printed... I don't know if that means if the EVT # is triggering or what. print '_onPaint' # draw whatever you want to draw # draw glossy bitmaps e.g. dc.DrawBitmap if self._mouseIn: # If the Mouse is over the button dc.DrawBitmap(self.Over_bmp, self.pos) else: # Since the mouse isn't over it Print the normal one # This is adding on the above code to draw the bmp # in an attempt to get the bmp to display; to no avail. dc.DrawBitmap(self.Norm_bmp, self.pos) if self._mouseDown: # If the Mouse clicks the button dc.DrawBitmap(self.Push_bmp, self.pos) This code won't work? I get no BMP displayed why? How do i get one? I've gotten the staticBitmap(...) to display one, but it won't move, resize, or anything for that matter... - it's only in the top left corner of the frame? Note: the frame is 400pxl X 400pxl - and the "/home/wallter/Desktop/Mouseover.bmp"
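
Two things in the posted code stand out and are worth ruling out first: the button is constructed with `parent` (the frame's own parent) instead of `self`, so it never becomes a child of the 400x400 frame, and `dc.DrawBitmap(self.Norm_bmp)` omits the x/y arguments, which raises an exception inside the paint handler (and would also explain why the print statement never appears). A minimal sketch of an owner-drawn bitmap button, assuming classic wxPython where wx.PyControl is available; the bitmap file names are placeholders:

    import wx

    class BitmapButton(wx.PyControl):
        """Owner-drawn button that swaps bitmaps on mouse-over."""
        def __init__(self, parent, id, pos, over_bmp, norm_bmp):
            # Size the control to its bitmap, otherwise it can end up 0x0 and invisible.
            wx.PyControl.__init__(self, parent, id, pos=pos,
                                  size=(norm_bmp.GetWidth(), norm_bmp.GetHeight()),
                                  style=wx.NO_BORDER)
            self.over_bmp = over_bmp
            self.norm_bmp = norm_bmp
            self.mouse_in = False
            self.Bind(wx.EVT_PAINT, self.on_paint)
            self.Bind(wx.EVT_ENTER_WINDOW, self.on_enter)
            self.Bind(wx.EVT_LEAVE_WINDOW, self.on_leave)
            self.Bind(wx.EVT_ERASE_BACKGROUND, lambda evt: None)  # reduce flicker

        def on_enter(self, event):
            self.mouse_in = True
            self.Refresh()  # state changed, ask the control to repaint itself

        def on_leave(self, event):
            self.mouse_in = False
            self.Refresh()

        def on_paint(self, event):
            dc = wx.BufferedPaintDC(self)
            dc.SetBackground(wx.Brush(self.GetBackgroundColour()))
            dc.Clear()
            bmp = self.over_bmp if self.mouse_in else self.norm_bmp
            # DrawBitmap needs explicit coordinates; (0, 0) is the control's own corner.
            dc.DrawBitmap(bmp, 0, 0, True)

    if __name__ == "__main__":
        app = wx.App(False)
        frame = wx.Frame(None, title="Bitmap button sketch", size=(400, 400))
        over = wx.Bitmap("Mouseover.bmp")  # assumed to sit next to the script
        norm = wx.Bitmap("Normal.bmp")
        BitmapButton(frame, -1, wx.Point(100, 100), over, norm)
        frame.Show()
        app.MainLoop()

Note that all drawing lives in the EVT_PAINT handler and the mouse handlers only change state and call Refresh(); creating a wx.BufferedPaintDC outside a paint event, as the frame's __init__ does above, is not expected to put anything on screen.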

    Read the article

  • How do you read from a file into an array of struct?

    - by Thomas.Winsnes
    I'm currently working on an assignment and this have had me stuck for hours. Can someone please help me point out why this isn't working for me? struct book { char title[25]; char author[50]; char subject[20]; int callNumber; char publisher[250]; char publishDate[11]; char location[20]; char status[11]; char type[12]; int circulationPeriod; int costOfBook; }; void PrintBookList(struct book **bookList) { int i; for(i = 0; i < sizeof(bookList); i++) { struct book newBook = *bookList[i]; printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n",newBook.title, newBook.author, newBook.subject, newBook.callNumber,newBook.publisher, newBook.publishDate, newBook.location, newBook.status, newBook.type,newBook.circulationPeriod, newBook.costOfBook); } } void GetBookList(struct book** bookList) { FILE* file = fopen("book.txt", "r"); struct book newBook[1024]; int i = 0; while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d", &newBook[i].title, &newBook[i].author, &newBook[i].subject, &newBook[i].callNumber,&newBook[i].publisher, &newBook[i].publishDate, &newBook[i].location, &newBook[i].status, &newBook[i].type,&newBook[i].circulationPeriod, &newBook[i].costOfBook) != EOF) { bookList[i] = &newBook[i]; i++; } /*while(fscanf(file, "%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d", &bookList[i].title, &bookList[i].author, &bookList[i].subject, &bookList[i].callNumber, &bookList[i].publisher, &bookList[i].publishDate, &bookList[i].location, &bookList[i].status, &bookList[i].type, &bookList[i].circulationPeriod, &bookList[i].costOfBook) != EOF) { i++; }*/ PrintBookList(bookList); fclose(file); } int main() { struct book *bookList[1024]; GetBookList(bookList); } I get no errors or warnings on compile it should print the content of the file, just like it is in the file. Like this: OperatingSystems Internals and Design principles;William.S;IT;741012759;Upper Saddle River;2009;QA7676063;Available;circulation;3;11200 Communication skills handbook;Summers.J;Accounting;771239216;Milton;2010;BF637C451;Available;circulation;3;7900 Business marketing management:B2B;Hutt.D;Management;741912319;Mason;2010;HF5415131;Available;circulation;3;1053 Patient education rehabilitation;Dreeben.O;Education;745121511;Sudbury;2010;CF5671A98;Available;reference;0;6895 Tomorrow's technology and you;Beekman.G;Science;764102174;Upper Saddle River;2009;QA76B41;Out;reserved;1;7825 Property & security: selected essay;Cathy.S;Law;750131231;Rozelle;2010;D4A3C56;Available;reference;0;20075 Introducing communication theory;Richard.W;IT;714789013;McGraw-Hill;2010;Q360W47;Available;circulation;3;12150 Maths for computing and information technology;Giannasi.F;Mathematics;729890537;Longman;Scientific;1995;QA769M35G;Available;reference;0;13500 Labor economics;George.J;Economics;715784761;McGraw-Hill;2010;HD4901B67;Available;circulation;3;7585 Human physiology:from cells to systems;Sherwood.L;Physiology;707558936;Cengage Learning;2010;QP345S32;Out;circulation;3;11135 bobs;thomas;IT;701000000;UC;1006;QA7548;Available;Circulation;7;5050 but when I run it, it outputs this: OperatingSystems;;;0;;;;;;0;0 Internals;;;0;;;;;;0;0 and;;;0;;;;;;0;0 Design;;;0;;;;;;0;0 principles;William.S;IT;741012759;Upper;41012759;Upper;;0;;;;;;0;0 Saddle;;;0;;;;;;0;0 River;2009;QA7676063;Available;circulation;3;11200;lable;circulation;3;11200;;0;;;;;;0;0 Communication;;;0;;;;;;0;0 Thanks in advance, you're a life saver
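
Two separate problems show up in that output: %s stops at whitespace rather than at the semicolons, which is why every space starts a new "record", and sizeof(bookList) inside PrintBookList measures a pointer, not the 1024-element array, so the loop bound is wrong as well. A sketch of the usual repair, reading one line at a time and splitting it with %[^;] scansets while passing the record count around explicitly; the field widths are taken from the struct sizes in the question:

    #include <stdio.h>

    struct book {
        char title[25];
        char author[50];
        char subject[20];
        int  callNumber;
        char publisher[250];
        char publishDate[11];
        char location[20];
        char status[11];
        char type[12];
        int  circulationPeriod;
        int  costOfBook;
    };

    /* The count is passed in: sizeof() on a function parameter only sees a pointer. */
    void PrintBookList(struct book *bookList, int count)
    {
        for (int i = 0; i < count; i++) {
            printf("%s;%s;%s;%d;%s;%s;%s;%s;%s;%d;%d\n",
                   bookList[i].title, bookList[i].author, bookList[i].subject,
                   bookList[i].callNumber, bookList[i].publisher,
                   bookList[i].publishDate, bookList[i].location,
                   bookList[i].status, bookList[i].type,
                   bookList[i].circulationPeriod, bookList[i].costOfBook);
        }
    }

    int main(void)
    {
        static struct book bookList[1024];
        char line[1024];
        int count = 0;

        FILE *file = fopen("book.txt", "r");
        if (!file) { perror("book.txt"); return 1; }

        /* %[^;] consumes everything up to the next ';' (unlike %s, which stops at
         * spaces), and the widths keep long fields from overflowing the buffers. */
        while (count < 1024 && fgets(line, sizeof line, file)) {
            int n = sscanf(line,
                "%24[^;];%49[^;];%19[^;];%d;%249[^;];%10[^;];%19[^;];%10[^;];%11[^;];%d;%d",
                bookList[count].title, bookList[count].author,
                bookList[count].subject, &bookList[count].callNumber,
                bookList[count].publisher, bookList[count].publishDate,
                bookList[count].location, bookList[count].status,
                bookList[count].type, &bookList[count].circulationPeriod,
                &bookList[count].costOfBook);
            if (n == 11)
                count++;
        }
        fclose(file);

        PrintBookList(bookList, count);
        return 0;
    }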

    Read the article

  • Domain Validation in a CQRS architecture

    - by Jupaol
    Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it but I would like your opinion The first approach I considered was: class Customer : EntityBase<Customer> { public void ChangeEmail(string email) { if(string.IsNullOrWhitespace(email)) throw new DomainException(“...”); if(!email.IsEmail()) throw new DomainException(); if(email.Contains(“@mailinator.com”)) throw new DomainException(); } } I actually do not like this validation because even when I am encapsulating the validation logic in the correct entity, this is violating the Open/Close principle (Open for extension but Close for modification) and I have found that violating this principle, code maintenance becomes a real pain when the application grows up in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, hard to maintain but the real reason why I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example but in RL the validation could be more complex So following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer> { private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver; public bool IsSatisfiedBy(Customer customer) { return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email); } } But then I realize that in order to follow this approach I had to mutate my entities first in order to pass the value being valdiated, in this case the email, but mutating them would cause my domain events being fired which I wouldn’t like to happen until the new email is valid So after considering these approaches, I came out with this one, since I am going to implement a CQRS architecture: class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand> { public void IsValid(Customer entity, ChangeEmailCommand command) { if(!command.Email.HasValidDomain()) throw new DomainException(“...”); } } Well that’s the main idea, the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user and since the validators are considered injectable objects they could have external dependencies injected if the validation requires it. Now the dilemma, I am happy with a design like this because my validation is encapsulated in individual objects which brings many advantages: easy unit test, easy to maintain, domain invariants are explicitly expressed using the Ubiquitous Language, easy to extend, validation logic is centralized and validators can be used together to enforce complex domain rules. And even when I know I am placing the validation of my entities outside of them (You could argue a code smell - Anemic Domain) but I think the trade-off is acceptable But there is one thing that I have not figured out how to implement it in a clean way. How should I use this components... 
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options: Pass the validators to each method of my entity Validate my objects externally (from the command handler) I am not happy with the option 1 so I would explain how I would do it with the option 2 class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand> { public void Execute(ChangeEmailCommand command) { private IEnumerable<IDomainInvariantValidator> validators; // here I would get the validators required for this command injected, and in here I would validate them, something like this using (var t = this.unitOfWork.BeginTransaction()) { var customer = this.unitOfWork.Get<Customer>(command.CustomerId); this.validators.ForEach(x =. x.IsValid(customer, command)); // here I know the command is valid // the call to ChangeEmail will fire domain events as needed customer.ChangeEmail(command.Email); t.Commit(); } } } Well this is it. Can you give me your thoughts about this or share your experiences with Domain entities validation EDIT I think it is not clear from my question, but the real problem is: Hiding the domain rules has serious implications in the future maintainability of the application, and also domain rules change often during the life-cycle of the app. Hence implementing them with this in mind would let us extend them easily. Now imagine in the future a rules engine is implemented, if the rules are encapsulated outside of the domain entities, this change would be easier to implement
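
A compilable sketch of option 2 as described above, with the repository and unit-of-work plumbing left out; the command, entity and exception shapes are assumed from the snippets in the question, and a hard-coded blocked-domain list stands in for the injected INotAllowedEmailDomainsResolver:

    using System;
    using System.Collections.Generic;

    // Shapes assumed from the question; only what the validator needs is shown.
    public class ChangeEmailCommand { public Guid CustomerId; public string Email; }
    public class DomainException : Exception { public DomainException(string m) : base(m) { } }
    public class Customer
    {
        public string Email { get; private set; }
        public void ChangeEmail(string email) { Email = email; /* raise domain event here */ }
    }

    // One invariant per class: easy to unit test and free to take dependencies.
    public interface IDomainInvariantValidator<TEntity, TCommand>
    {
        void IsValid(TEntity entity, TCommand command);
    }

    public class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
    {
        // Stand-in for the injected INotAllowedEmailDomainsResolver.
        private static readonly string[] Blocked = { "mailinator.com" };

        public void IsValid(Customer entity, ChangeEmailCommand command)
        {
            if (string.IsNullOrWhiteSpace(command.Email) || !command.Email.Contains("@"))
                throw new DomainException("A valid email address is required.");
            var domain = command.Email.Split('@')[1];
            if (Array.IndexOf(Blocked, domain) >= 0)
                throw new DomainException("Email domain is not allowed.");
        }
    }

    // The handler runs every registered invariant before touching the aggregate,
    // so no domain events fire for input that was never valid.
    public class ChangeEmailCommandHandler
    {
        private readonly IEnumerable<IDomainInvariantValidator<Customer, ChangeEmailCommand>> validators;

        public ChangeEmailCommandHandler(
            IEnumerable<IDomainInvariantValidator<Customer, ChangeEmailCommand>> validators)
        {
            this.validators = validators;
        }

        public void Execute(Customer customer, ChangeEmailCommand command)
        {
            foreach (var v in validators)
                v.IsValid(customer, command);
            customer.ChangeEmail(command.Email);
        }
    }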

    Read the article

  • OSX Server 3, Mac clients binding to OD and Profile Manager failing

    - by dbf
    I've made a setup containing a Mac Mini with OSX Server 3 (Mavericks 10.9.2) using Open Directory and Profile Manager (Mail, etc all set up and working). Now the thing is, internally on the local network, everything works great. Clients can bind to the OD and the users are able to login. I can install trust and settings profiles (either custom or group profiles) and all services in the profiles mentioned are being configured correctly. I can log in and out, hump around and do it a 100 times on different macs with different users, it works. My goal is to make this service publicly. The domain is with a FQDN which I own, for simplicity let's say server.domain.com. Now the only way for me to bind the clients to the OD is using LDAP mapping RCF2307 (without SSL) and a DN suffix of dc=server,dc=domain,dc=com using the Directory Utility. The options from server, or open directory will throw several errors like Connection failed to node '/LDAPv3/server.domain.com (2100). First of all I don't really understand the problem why clients can't bind to the OD like it does locally, with and without SSL (all ports are open, literally all ports are open, not just 389,636 and 1640, wasn't sure if I was missing any). When the clients are using LDAP mapping RFC2307 to bind (without SSL only), clients are able to authenticate, login and even load the Trust profile. But every Settings profile will fail with a Debug Message: Unable to find GUID in user record OD or fail to install saying missing user identification. Is there any way to get this to work without RFC2307? Because there is quite some stuff missing when using RFC2307 and not pull the mapping from the server or use open directory. Is this setup even possible? Or should I use VPN to authenticate with the OD? The network setup is a Modem/Router (DHCP off) with WAN NATted to an Airport Extreme (Using DHCP+NAT). The AE does notify with a double NAT message but I haven't had any problems with it on any other service. So WAN - 192.168.2.220 (static), AE - 10.0.1.* (dhcp) Output of DIG from the outside using dig server.domain.com ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;server.domain.com. IN A ;; ANSWER SECTION: server.domain.com. 77 IN A 91.50.*.* (valid WAN IP) ;; SERVER 172.*.*.1#53(172.*.*.1) (iPhone) DIG locally from a client and server (same output) ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0 ;; QUESTION SECTION: ;server.domain.com. IN A ;; ANSWER SECTION: server.domain.com. 10800 IN A 10.0.1.11 ;; AUTHORITY SECTION: server.domain.com. 10800 IN NS domain.com. (used for email send in relay) server.domain.com. 10800 IN NS server.domain.com. ;; SERVER 10.0.1.11#53(10.0.1.11) Are there any things I should check? Only have OSX. -- double NAT issue, plugged in the server directly on the Modem/Router with a static IP and issue remains. Guess that rules out the double NAT thing. -- changeip -checkhostname comes with There is nothing to change, e.g. success. Primary address = 10.0.1.11 Current HostName = server.domain.com DNS HostName = server.domain.com For now, I've made a workaround by using an admin account that forces a permanent VPN connection on boot. That means before it comes to the login, a connection is already made or underway. I will continue this post when I have more time, also locating all the necessary .log files of each application involved. I have some suspicions but have to debug a bit more when I have more time on my hands .. 
Unless, of course, I get sidetracked with having a life. Which is arguably not very likely.

    Read the article

  • developing maven plugin, how to exclude bitkeeper files

    - by Denali
    Hi There, I am trying to write my first maven plugin. I'd like to exclude all the java files related to the source repository I'm using, which is BitKeeper. These files live in directories called SCCS. I can't for the life of me figure out how to do this. When I add the maven-compile-plugin with excludes data, it works (the bk files are excluded) if I specify mvn compiler:compile. But this is not binding to the compile phase. So that when I run mvn compile, it blows up trying to compile a source control specific java file. Any help or pointers appreciated. Another thing to note: Everything works perfectly if I change the packaging from "maven-plugin" to "jar", which of course, I can't do permanently since this is a maven plugin I am trying to write. I'm sorry if this is answered elsewhere. I've looked around for several hours here and through the maven docs, but everything on this topic seems to be related to writing code which will be packaged in jars, not maven plugins. Here's my pom.xml: <project> <modelVersion>4.0.0</modelVersion> <groupId>com.mycomp.mygroup</groupId> <artifactId>special-persistence-plugin</artifactId> <packaging>maven-plugin</packaging> <version>1.0-SNAPSHOT</version> <name>Special Persistence Plugin</name> <dependencies> <dependency> <groupId>org.apache.maven</groupId> <artifactId>maven-plugin-api</artifactId> <version>2.0</version> </dependency> </dependencies> <build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> <encoding>UTF-8</encoding> <excludes> <exclude>**/SCCS/**/*.java</exclude> </excludes> <phase>compile</phase> <goals> <goal>compiler:compile</goal> </goals> </configuration> </plugin> </plugins> </build> </project> Thank you to anyone with ideas about this, -Denali
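
One detail worth checking in the pom above: <phase> and <goals> are not configuration parameters of the compiler plugin, so placed inside <configuration> they are ignored (an explicit binding would live in an <executions> block instead). A sketch of the plugin section with only the elements the compiler plugin actually reads; whether this alone resolves the maven-plugin packaging case is an assumption to verify with a plain mvn compile:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
        <encoding>UTF-8</encoding>
        <!-- Plugin-level configuration applies to every compile execution,
             including the default one that runs during `mvn compile`. -->
        <excludes>
          <exclude>**/SCCS/**</exclude>
        </excludes>
      </configuration>
    </plugin>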

    Read the article

  • Action bar with Search View. Reverse compatibility issues

    - by suresh
    I am building a sample app to demonstrate SearchView with filter and other Action Bar items. I am able to successfully run this app on 4.2(Nexus 7). But it is not running on 2.3. I googled about the issue. Came to know that i should use SherLock Action bar. I just went to http://actionbarsherlock.com/download.html, downloaded the zip file and added the library as informed in the video: http://www.youtube.com/watch?v=4GJ6yY1lNNY&feature=player_embedde by WiseManDesigns. But still I am unable to figure out the issue. Here is my code: SearchViewActionBar.java public class SearchViewActionBar extends Activity implements SearchView.OnQueryTextListener { private SearchView mSearchView; private TextView mStatusView; int mSortMode = -1; private ListView mListView; private ArrayAdapter<String> mAdapter; protected CharSequence[] _options = { "Wild Life", "River", "Hill Station", "Temple", "Bird Sanctuary", "Hill", "Amusement Park"}; protected boolean[] _selections = new boolean[ _options.length ]; private final String[] mStrings = Cheeses.sCheeseStrings; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); getWindow().requestFeature(Window.FEATURE_ACTION_BAR); setContentView(R.layout.activity_main); // mStatusView = (TextView) findViewById(R.id.status_text); // mSearchView = (SearchView) findViewById(R.id.search_view); mListView = (ListView) findViewById(R.id.list_view); mListView.setAdapter(mAdapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, mStrings)); mListView.setTextFilterEnabled(true); //setupSearchView(); } private void setupSearchView() { mSearchView.setIconifiedByDefault(true); mSearchView.setOnQueryTextListener(this); mSearchView.setSubmitButtonEnabled(false); //mSearchView.setQueryHint(getString(R.string.cheese_hunt_hint)); } @Override public boolean onCreateOptionsMenu(Menu menu) { super.onCreateOptionsMenu(menu); MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.searchview_in_menu, menu); MenuItem searchItem = menu.findItem(R.id.action_search); mSearchView = (SearchView) searchItem.getActionView(); //setupSearchView(searchItem); setupSearchView(); return true; } @Override public boolean onPrepareOptionsMenu(Menu menu) { if (mSortMode != -1) { Drawable icon = menu.findItem(mSortMode).getIcon(); menu.findItem(R.id.action_sort).setIcon(icon); } return super.onPrepareOptionsMenu(menu); } @Override public boolean onOptionsItemSelected(MenuItem item) { String c="Category"; String s=(String) item.getTitle(); if(s.equals(c)) { System.out.println("same"); showDialog( 0 ); } //System.out.println(s); Toast.makeText(this, "Selected Item: " + item.getTitle(), Toast.LENGTH_SHORT).show(); return true; } protected Dialog onCreateDialog( int id ) { return new AlertDialog.Builder( this ) .setTitle( "Category" ) .setMultiChoiceItems( _options, _selections, new DialogSelectionClickHandler() ) .setPositiveButton( "SAVE", new DialogButtonClickHandler() ) .create(); } public class DialogSelectionClickHandler implements DialogInterface.OnMultiChoiceClickListener { public void onClick( DialogInterface dialog, int clicked, boolean selected ) { Log.i( "ME", _options[ clicked ] + " selected: " + selected ); } } public class DialogButtonClickHandler implements DialogInterface.OnClickListener { public void onClick( DialogInterface dialog, int clicked ) { switch( clicked ) { case DialogInterface.BUTTON_POSITIVE: printSelectedPlanets(); break; } } } protected void printSelectedPlanets() { for( int i = 0; i < _options.length; 
i++ ){ Log.i( "ME", _options[ i ] + " selected: " + _selections[i] ); } } public void onSort(MenuItem item) { mSortMode = item.getItemId(); invalidateOptionsMenu(); } public boolean onQueryTextChange(String newText) { if (TextUtils.isEmpty(newText)) { mListView.clearTextFilter(); } else { mListView.setFilterText(newText.toString()); } return true; } public boolean onQueryTextSubmit(String query) { mStatusView.setText("Query = " + query + " : submitted"); return false; } public boolean onClose() { mStatusView.setText("Closed!"); return false; } protected boolean isAlwaysExpanded() { return false; } }
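
A minimal sketch of the ActionBarSherlock version of the activity, under the assumption that the library project is already referenced; the class and method names (SherlockActivity, getSupportMenuInflater) are the ones ActionBarSherlock documents, but verify them against the release actually downloaded. Note also that android.widget.SearchView only exists from API 11, so on 2.3 the menu item's action view needs a backported widget as well (recent ActionBarSherlock releases bundle one; otherwise a plain EditText can stand in):

    import android.os.Bundle;
    import com.actionbarsherlock.app.SherlockActivity;
    import com.actionbarsherlock.view.Menu;
    import com.actionbarsherlock.view.MenuItem;

    public class SearchViewActionBar extends SherlockActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // No requestFeature(Window.FEATURE_ACTION_BAR) here: SherlockActivity
            // supplies the compatibility action bar on pre-3.0 devices itself.
            setContentView(R.layout.activity_main);
        }

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            // Use the Sherlock inflater so the items land in the compatibility
            // action bar on Android 2.x instead of the legacy options menu.
            getSupportMenuInflater().inflate(R.menu.searchview_in_menu, menu);
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            // Same per-item handling as in the original activity goes here.
            return super.onOptionsItemSelected(item);
        }
    }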

    Read the article

  • Bewildering SegFault involving STL sort algorithm.

    - by just_wes
    Hello everybody, I am completely perplexed at a seg fault that I seem to be creating. I have: vector<unsigned int> words; and global variable string input; I define my custom compare function: bool wordncompare(unsigned int f, unsigned int s) { int n = k; while (((f < input.size()) && (s < input.size())) && (input[f] == input[s])) { if ((input[f] == ' ') && (--n == 0)) { return false; } f++; s++; } return true; } When I run the code: sort(words.begin(), words.end()); The program exits smoothly. However, when I run the code: sort(words.begin(), words.end(), wordncompare); I generate a SegFault deep within the STL. The GDB back-trace code looks like this: #0 0x00007ffff7b79893 in std::string::size() const () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/libstdc++.so.6 #1 0x0000000000400f3f in wordncompare (f=90, s=0) at text_gen2.cpp:40 #2 0x000000000040188d in std::__unguarded_linear_insert<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, unsigned int, bool (*)(unsigned int, unsigned int)> (__last=..., __val=90, __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1735 #3 0x00000000004018df in std::__unguarded_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1812 #4 0x0000000000402562 in std::__final_insertion_sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:1845 #5 0x0000000000402c20 in std::sort<__gnu_cxx::__normal_iterator<unsigned int*, std::vector<unsigned int, std::allocator<unsigned int> > >, bool (*)(unsigned int, unsigned int)> (__first=..., __last=..., __comp=0x400edc <wordncompare(unsigned int, unsigned int)>) at /usr/lib/gcc/x86_64-pc-linux-gnu/4.3.4/include/g++-v4/bits/stl_algo.h:4822 #6 0x00000000004012d2 in main (argc=1, args=0x7fffffffe0b8) at text_gen2.cpp:70 I have similar code in another program, but in that program I am using a vector instead of vector. For the life of me I can't figure out what I'm doing wrong. Thanks!
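
The comparator, rather than the vector, is the likely culprit: std::sort requires a strict weak ordering, and wordncompare returns true whenever the two suffixes agree on their first n words (and whenever either index reaches the end of input), so two equivalent keys each compare as "less than" the other. libstdc++'s __unguarded_linear_insert, visible in the backtrace with s=0, relies on that guarantee and walks off the front of the range when it is broken. A sketch of a comparator that keeps the n-word key idea but satisfies the ordering rules, assuming words holds starting offsets into the global input string and k is the word count:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    std::string input;   // global text, as in the question
    int k = 2;           // number of words that make up a key

    // Extract the key starting at offset f: at most k space-separated words.
    static std::string key_at(unsigned int f)
    {
        std::string::size_type end = f;
        int words_left = k;
        while (end < input.size()) {
            if (input[end] == ' ' && --words_left == 0)
                break;
            ++end;
        }
        return input.substr(f, end - f);
    }

    // Strict weak ordering: true only when the key at f sorts strictly before
    // the key at s, and false for equivalent keys (so comp(x, x) is false).
    bool wordncompare(unsigned int f, unsigned int s)
    {
        return key_at(f) < key_at(s);
    }

    int main()
    {
        input = "the quick brown fox the quick red fox";
        unsigned int offsets[] = {0, 4, 10, 16, 20, 24, 30, 34};
        std::vector<unsigned int> words(offsets, offsets + 8);

        std::sort(words.begin(), words.end(), wordncompare);

        for (std::vector<unsigned int>::size_type i = 0; i < words.size(); ++i)
            std::cout << words[i] << ": " << key_at(words[i]) << "\n";
        return 0;
    }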

    Read the article

  • Can anyone help convert this VB Webservice?

    - by CraigJSte
    I can't figure it out for the life of me.. I'm using SQL DataSet Query to iterate to a Class Object which acts as a Data Model for Flex... Initially I used VB.net but now need to convert to C#.. This conversion is done except for the last section where I create a DataRow arow and then try to add the DataSet Values to the Class (Results Class)... I get an error message.. 'VTResults.Results.Ticker' is inaccessible due to its protection level (this is down at the bottom) using System; using System.Web; using System.Collections; using System.Web.Services; using System.Web.Services.Protocols; using System.Data; using System.Data.SqlClient; using System.Configuration; /// <summary> /// Summary description for VTResults /// </summary> [WebService(Namespace = "http://velocitytrading.net/ResultsVT.aspx")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] public class VTResults : System.Web.Services.WebService { public class Results { string Ticker; string BuyDate; decimal Buy; string SellDate; decimal Sell; string Profit; decimal Period; } [WebMethod] public Results[] GetResults() { string conn = ConfigurationManager.ConnectionStrings["LocalSqlServer"].ConnectionString; SqlConnection myconn = new SqlConnection(conn); SqlCommand mycomm = new SqlCommand(); SqlDataAdapter myda = new SqlDataAdapter(); DataSet myds = new DataSet(); mycomm.CommandType = CommandType.StoredProcedure; mycomm.Connection = myconn; mycomm.CommandText = "dbo.Results"; myconn.Open(); myda.SelectCommand = mycomm; myda.Fill(myds); myconn.Close(); myconn.Dispose(); int i = 0; Results[] dts = new Results[myds.Tables[0].Rows.Count]; foreach(DataRow arow in myds.Tables[0].Rows) { dts[i] = new Results(); dts[i].Ticker = arow["Ticker"]; dts[i].BuyDate = arow["BuyDate"]; dts[1].Buy = arow["Buy"]; dts[i].SellDate = arow["SellDate"]; dts[i].Sell = arow["Sell"]; dts[i].Profit = arow["Profit"]; dts[i].Period = arow["Period"]; i+=1; } return dts; } } The VB.NET WEBSERVICE that runs fine which I am trying to convert to C# is here. 
Imports System.Web Imports System.Web.Services Imports System.Web.Services.Protocols Imports System.Data Imports System.Data.SqlClient <WebService(Namespace:="http://localhost:2597/Results/ResultsVT.aspx")> _ <WebServiceBinding(ConformsTo:=WsiProfiles.BasicProfile1_1)> _ <Global.Microsoft.VisualBasic.CompilerServices.DesignerGenerated()> _ Public Class VTResults Inherits System.Web.Services.WebService Public Class Results Public Ticker As String Public BuyDate As String Public Buy As Decimal Public SellDate As String Public Sell As Decimal Public Profit As String Public Period As Decimal End Class <WebMethod()> _ Public Function GetResults() As Results() Try Dim conn As String = ConfigurationManager.ConnectionStrings("LocalSqlServer").ConnectionString Dim myconn = New SqlConnection(conn) Dim mycomm As New SqlCommand Dim myda As New SqlDataAdapter Dim myds As New DataSet mycomm.CommandType = CommandType.StoredProcedure mycomm.Connection = myconn mycomm.CommandText = "dbo.Results" myconn.Open() myda.SelectCommand = mycomm myda.Fill(myds) myconn.Close() myconn.Dispose() Dim i As Integer i = 0 Dim dts As Results() = New Results(myds.Tables(0).Rows.Count - 1) {} Dim aRow As DataRow For Each aRow In myds.Tables(0).Rows dts(i) = New Results dts(i).Ticker = aRow("Ticker") dts(i).BuyDate = aRow("BuyDate") dts(i).Buy = aRow("Buy") dts(i).SellDate = aRow("SellDate") dts(i).Sell = aRow("Sell") dts(i).Profit = aRow("Profit") dts(i).Period = aRow("Period") i += 1 Next Return dts Catch ex As DataException Throw ex End Try End Function End Class
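
The "inaccessible due to its protection level" error comes straight from default accessibility: VB declared those fields Public, but the C# translation drops the modifier, and C# class members default to private. Marking them public (fields or auto-properties both serialize over ASMX) clears it; the assignments then also need explicit conversions, since the DataRow indexer returns object. A sketch of just those two spots, with the rest of the translation assumed unchanged (note the stray dts[1] in the original loop):

    public class Results
    {
        public string Ticker;
        public string BuyDate;
        public decimal Buy;
        public string SellDate;
        public decimal Sell;
        public string Profit;
        public decimal Period;
    }

    // Inside GetResults(), after filling the DataSet:
    int i = 0;
    Results[] dts = new Results[myds.Tables[0].Rows.Count];
    foreach (DataRow arow in myds.Tables[0].Rows)
    {
        dts[i] = new Results();
        dts[i].Ticker   = arow["Ticker"].ToString();      // DataRow[...] returns object,
        dts[i].BuyDate  = arow["BuyDate"].ToString();     // so convert each value explicitly
        dts[i].Buy      = Convert.ToDecimal(arow["Buy"]); // (the original wrote dts[1] here)
        dts[i].SellDate = arow["SellDate"].ToString();
        dts[i].Sell     = Convert.ToDecimal(arow["Sell"]);
        dts[i].Profit   = arow["Profit"].ToString();
        dts[i].Period   = Convert.ToDecimal(arow["Period"]);
        i += 1;
    }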

    Read the article

  • What can I do to improve a project if there is a no-listening situation? Developers vs Management

    - by NazGul
    Hi all, I hope that I'm not the only one and I can get a answer from someone with more experience than me, so I can think cleaner and I don't get depressed with this developer's life. I'm working as developer for a small company three years now. In that three years I'm working in the same project and sincerely, I think this project could be used as a CASE STUDY because it has all the situations that cannot happen in a project and that makes a project fails. To begin with, and I believe you've already noticed, the project has 3 years already (develoment only) and is still unfinished, because in every meeting there is a "new priority" ,or a "new problem" to be solve or a "new feature" to be add. So, first problem is no target set. How can you know when something is finished if you don't know what you want? I understand Management, because they see an oportunity and try to get that, but I don't understand how can they not see (or hear us) that they'll lose all they already have and what they'll eventually get. Second, there is no team group. My team consists of three people, a Senior Developer, a DBA and, finally, I for all the work (support, testing, new features, bug fixing, meeting, projet management of clients, etc) aka Junior Developer. The first (senior developer), does not perform any tests on his changes, so, most of the time, his changes give us problems (us = me, since I'm the one who will fix it). The second (DBA) is an uncompromising person and you can not talk to him, believe me, I tried! In his view, everything he does is fantastic... even if it is the most complicated to make it... And he does everything he wants, even if we need that only for 5 months later and would help some extra-hand to do the things we have to do for now. As you can see, there is very hard to work with no help... Third, there is no testings. Every... I repeat, Every release of the project, the customers wants to kill us, because there is a lot of bugs. Management? They say that they want tests before the release. Us? We say the same. Time? No time. Management? There is always some time to open the application and click in some buttons. Us? Try to explain that it is not so simple. Management doesn't care... end of story. Actually, must of the bugs could be avoid with a rigorous work... Some people just want to do the show to the Management. "Did you ask for this? Cool, it's done. Bugs? The Do-all-the-work guy will solve." Unfortunally for me, sometimes the Do-all-the-work also has to finish it. And to makes this all better, I'm the person who will listen the complaints from the customers. Cool, huh? I know, everyone makes mistakes. But there is mistakes and mistakes... To complete, in the Management view, "the problem is the lack of an individual project management", because we cannot do all the stuff they ask, even if there is no PM for the project itself. And ask us to work overtime without any reward... I do say all this stuff to the management and others members, but by telling this, the I'm the bad guy, the guy who is complain when everything is going well... but we need to work overtime... sigh What can I do to make it works? Anyone has a situation like this, what did you do? I hope you could understand my problem, my English is a little rusty. Thanks.

    Read the article

  • SQL Server 2008: Using Multiple dts Ranges to Build a Set of Dates

    - by raoulcousins
    I'm trying to build a query for a medical database that counts the number of patients that were on at least one medication from a class of medications (the medications listed below in the FAST_MEDS CTE) and had either: 1) A diagnosis of myopathy (the list of diagnoses in the FAST_DX CTE) 2) A CPK lab value above 1000 (the lab value in the FAST_LABS CTE) and this diagnosis or lab happened AFTER a patient was on a statin. The query I've included below does that under the assumption that once a patient is on a statin, they're on a statin forever. The first CTE collects the ids of patients that were on a statin along with the first date of their diagnosis, the second those with a diagnosis, and the third those with a high lab value. After this I count those that match the above criteria. What I would like to do is drop the assumption that once a patient is on a statin, they're on it for life. The table edw_dm.patient_medications has a column called start_dts and end_dts. This table has one row for each prescription written, with start_dts and end_dts denoting the start and end date of the prescription. End_dts could be null, which I'll take to assume that the patient is currently on this medication (it could be a missing record, but I can't do anything about this). If a patient is on two different statins, the start and ends dates can overlap, and there may be multiple records of the same medication for a patient, as in a record showing 3-11-2000 to 4-5-2003 and another for the same patient showing 5-6-2007 to 7-8-2009. I would like to use these two columns to build a query where I'm only counting the patients that had a lab value or diagnosis done during a time when they were already on a statin, or in the first n (say 3) months after they stopped taking a statin. I'm really not sure how to go about rewriting the first CTE to get this information and how to do the comparison after the CTEs are built. I know this is a vague question, but I'm really stumped. Any ideas? As always, thank you in advance. 
Here's the current query: WITH FAST_MEDS AS ( select distinct statins.mrd_pt_id, min(year(statins.order_dts)) as statin_yr from edw_dm.patient_medications as statins inner join mrd.medications as mrd on statins.mrd_med_id = mrd.mrd_med_id WHERE mrd.generic_nm in ( 'Lovastatin (9664708500)', 'lovastatin-niacin', 'Lovastatin/Niacin', 'Lovastatin', 'Simvastatin (9678583966)', 'ezetimibe-simvastatin', 'niacin-simvastatin', 'ezetimibe/Simvastatin', 'Niacin/Simvastatin', 'Simvastatin', 'Aspirin Buffered-Pravastatin', 'aspirin-pravastatin', 'Aspirin/Pravastatin', 'Pravastatin', 'amlodipine-atorvastatin', 'Amlodipine/atorvastatin', 'atorvastatin', 'fluvastatin', 'rosuvastatin' ) and YEAR(statins.order_dts) IS NOT NULL and statins.mrd_pt_id IS NOT NULL group by statins.mrd_pt_id ) select * into #meds from FAST_MEDS ; --return patients who had a diagnosis in the list and the year that --diagnosis was given with FAST_DX AS ( SELECT pd.mrd_pt_id, YEAR(pd.init_noted_dts) as init_yr FROM edw_dm.patient_diagnoses as pd inner join mrd.diagnoses as mrd on pd.mrd_dx_id = mrd.mrd_dx_id and mrd.icd9_cd in ('728.89','729.1','710.4','728.3','729.0','728.81','781.0','791.3') ) select * into #dx from FAST_DX; --return patients who had a high cpk value along with the year the cpk --value was taken with FAST_LABS AS ( SELECT pl.mrd_pt_id, YEAR(pl.order_dts) as lab_yr FROM edw_dm.patient_labs as pl inner join mrd.labs as mrd on pl.mrd_lab_id = mrd.mrd_lab_id and mrd.lab_nm = 'CK (CPK)' WHERE pl.lab_val between 1000 AND 999998 ) select * into #labs from FAST_LABS; -- count the number of patients who had a lab value or a medication -- value taken sometime AFTER their initial statin diagnosis select count(distinct p.mrd_pt_id) as ct from mrd.patient_demographics as p join #meds as m on p.mrd_pt_id = m.mrd_pt_id AND ( EXISTS ( SELECT 'A' FROM #labs l WHERE p.mrd_pt_id = l.mrd_pt_id and l.lab_yr >= m.statin_yr ) OR EXISTS( SELECT 'A' FROM #dx d WHERE p.mrd_pt_id = d.mrd_pt_id AND d.init_yr >= m.statin_yr ) )
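
A sketch of how the interval version could look, assuming start_dts and end_dts exist on edw_dm.patient_medications as described (a NULL end_dts treated as still active), keeping the three-month grace period with DATEADD, and abbreviating the statin name list for readability; everything else reuses the tables and filters from the query above:

    WITH statin_intervals AS (
        SELECT pm.mrd_pt_id,
               pm.start_dts,
               -- open-ended prescriptions count as "still on the statin"
               DATEADD(month, 3, ISNULL(pm.end_dts, GETDATE())) AS covered_until
        FROM edw_dm.patient_medications AS pm
        JOIN mrd.medications AS mrd ON pm.mrd_med_id = mrd.mrd_med_id
        WHERE mrd.generic_nm IN (/* same statin list as FAST_MEDS */ 'Simvastatin', 'atorvastatin')
    ),
    qualifying_labs AS (
        SELECT pl.mrd_pt_id, pl.order_dts
        FROM edw_dm.patient_labs AS pl
        JOIN mrd.labs AS mrd ON pl.mrd_lab_id = mrd.mrd_lab_id
        WHERE mrd.lab_nm = 'CK (CPK)'
          AND pl.lab_val BETWEEN 1000 AND 999998
    ),
    qualifying_dx AS (
        SELECT pd.mrd_pt_id, pd.init_noted_dts AS order_dts
        FROM edw_dm.patient_diagnoses AS pd
        JOIN mrd.diagnoses AS mrd ON pd.mrd_dx_id = mrd.mrd_dx_id
        WHERE mrd.icd9_cd IN ('728.89','729.1','710.4','728.3','729.0','728.81','781.0','791.3')
    )
    SELECT COUNT(DISTINCT si.mrd_pt_id) AS ct
    FROM statin_intervals AS si
    WHERE EXISTS (SELECT 1 FROM qualifying_labs AS ql
                  WHERE ql.mrd_pt_id = si.mrd_pt_id
                    AND ql.order_dts BETWEEN si.start_dts AND si.covered_until)
       OR EXISTS (SELECT 1 FROM qualifying_dx AS qd
                  WHERE qd.mrd_pt_id = si.mrd_pt_id
                    AND qd.order_dts BETWEEN si.start_dts AND si.covered_until);

Comparing full dates rather than extracted years also removes the false positives the original YEAR() comparisons allow within the same calendar year.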

    Read the article

  • Windows 7: Search indexing is stuck

    - by Ricket
    When I open Indexing Options, it says: 4,317 items indexed Indexing in progress. Search results might not be complete during this time. It's stuck at 4,317 though; no more items have been indexed. Worst of all, SearchIndexer.exe is taking up 100% CPU (well, 50%, but I have a dual core CPU; it's taking up all processing power it can). It is not causing hard drive activity though. I tried clicking "Troubleshoot search and indexing" at the bottom of the Indexing Options window, but it couldn't find any problem. I've also tried the repair registry key that several websites suggest; I change HKLM\SOFTWARE\Microsoft\Windows Search SetupCompletedSuccessfully to 0 and restarted the computer, and it apparently repaired because it flipped back to 1, but the same problem continues to occur. It's reducing the battery life of my laptop and making it really hot so that my fans are running all the time. I've had to disable the Windows Search service. How can I fix this? Do I need to just flat-out reformat my computer? Update: I've tried rebuilding a couple times. There's nothing unusual about the locations I have to index, and I don't have any downloads in progress or anything like that. I don't see any reason why it stopped, and I noticed it much too late to do a system restore. At this point, I'm hoping someone will offer up some secret answer that will fix the problem, thus the bounty. Another update: I tried starting the service again, just to let it try yet again. It seemed okay at first (Indexing Options showed it operating at reduced speed due to user activity, and the number of files was going up). A while later I checked, and the service had stopped. Event viewer revealed some errors like this: Log Name: Application Source: Application Error Date: 2/1/2010 7:34:23 PM Event ID: 1000 Task Category: (100) Level: Error Keywords: Classic User: N/A Computer: ricky-win7 Description: Faulting application name: SearchIndexer.exe, version: 7.0.7600.16385, time stamp: 0x4a5bcdd0 Faulting module name: NLSData0007.dll, version: 6.1.7600.16385, time stamp: 0x4a5bda88 Exception code: 0xc0000005 Fault offset: 0x002141ba Faulting process id: 0x13a0 Faulting application start time: 0x01caa39f2a70ec02 Faulting application path: C:\Windows\system32\SearchIndexer.exe Faulting module path: C:\Windows\System32\NLSData0007.dll Report Id: b4f7a7ae-0f92-11df-87fc-e5d65d8794c2 Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Application Error" /> <EventID Qualifiers="0">1000</EventID> <Level>2</Level> <Task>100</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2010-02-02T00:34:23.000000000Z" /> <EventRecordID>10689</EventRecordID> <Channel>Application</Channel> <Computer>ricky-win7</Computer> <Security /> </System> <EventData> <Data>SearchIndexer.exe</Data> <Data>7.0.7600.16385</Data> <Data>4a5bcdd0</Data> <Data>NLSData0007.dll</Data> <Data>6.1.7600.16385</Data> <Data>4a5bda88</Data> <Data>c0000005</Data> <Data>002141ba</Data> <Data>13a0</Data> <Data>01caa39f2a70ec02</Data> <Data>C:\Windows\system32\SearchIndexer.exe</Data> <Data>C:\Windows\System32\NLSData0007.dll</Data> <Data>b4f7a7ae-0f92-11df-87fc-e5d65d8794c2</Data> </EventData> </Event> If you are having the same error and arrived here from a Google search, please comment or add an answer detailing your progress on this, if any...

    Read the article

  • Conflict between some JavaScript and jQuery on same page

    - by hollyb
    I am using a JavaScript function and some jQuery to perform two actions on a page. The first is a simple JS function to hide/show divs and change the active state of a tab: This is the JS that show/hides divs and changes the active state on some tabs: var ids=new Array('section1','section2','section3'); function switchid(id, el){ hideallids(); showdiv(id); var li = el.parentNode.parentNode.childNodes[0]; while (li) { if (!li.tagName || li.tagName.toLowerCase() != "li") li = li.nextSibling; // skip the text node if (li) { li.className = ""; li = li.nextSibling; } } el.parentNode.className = "active"; } function hideallids(){ //loop through the array and hide each element by id for (var i=0;i<ids.length;i++){ hidediv(ids[i]); } } function hidediv(id) { //safe function to hide an element with a specified id document.getElementById(id).style.display = 'none'; } function showdiv(id) { //safe function to show an element with a specified id document.getElementById(id).style.display = 'block'; } The html: <ul> <li class="active"><a onclick="switchid('section1', this);return false;">ONE</a></li> <li><a onclick="switchid('section2', this);return false;">TWO</a></li> <li><a onclick="switchid('section3', this);return false;">THREE</a></li> </ul> <div id="section1" style="display:block;">TEST</div> <div id="section2" style="display:none;">TEST 2</div> <div id="section3" style="display:none;">TEST 3</div> Now the problem.... I've added the jQuery image gallery called galleria to one of the tabs. The gallery works great when it resides in the div that is intially set to display:block. However, when it is in one of the divs that is set to display: none; part of the gallery doesn't work when the div is toggled to be visible. Specifically, the following css ceases to be written (this is created by galleria jQuery): element.style { display:block; height:50px; margin-left:-17px; width:auto; } For the life of me, I can't figure out why the gallery fails when it's div is set to display: none. Since this declaration is overwritten when a tab is clicked (via the Javascript functions above), why would this cause a problem? As I mentioned, it works perfectly when it lives the in display: block; div. Any ideas? I don't expect anybody to be familiar with the jQuery galleria image gallery... but perhaps an idea of how one might repair this problem? Thanks!
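
Galleria, like most plugins that measure their container, computes its sizes when it initialises, and an element under display:none reports zero width and height, so the geometry it writes comes out wrong even after the tab becomes visible. A sketch of one common workaround, deferring initialisation until the tab is first shown; the .galleria() call is a placeholder for however the gallery is actually initialised in this project:

    // Assumes jQuery is loaded and that section2 is the tab holding the gallery.
    var galleryInitialised = false;

    function switchid(id, el) {
        hideallids();
        showdiv(id);   // same helpers as defined above
        if (id === 'section2' && !galleryInitialised) {
            // The div is display:block now, so the plugin measures real dimensions.
            $('#section2 .gallery').galleria();   // placeholder for the real init call
            galleryInitialised = true;
        }
        /* ...existing code that updates the active tab class... */
    }

An alternative with the same effect is to initialise the gallery while its div is parked off-screen (position:absolute; left:-9999px) rather than display:none, and only then switch it to the normal hidden/shown toggling.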

    Read the article

  • Android app hanging, sometimes until Force Close / Wait dialog appears

    - by fredley
    I'm making an app that records uncompressed (wav format) audio. I'm using this class to actually record the audio. Currently, my application records fine (I can play the file), however when I click the button to stop the recording, the app hangs for 10 seconds or so, with no log output or any signs of life. Finally it comes round, dumps a load of errors into the log, updates the UI etc. I'm using AsyncTasks to try and avoid this kind of thing but it's not working. Here's my code: //Called on clicks of the record button. rar is the instance of RehearsalAudioRecorder private OnClickListener RecordListener = new OnClickListener(){ @Override public void onClick(View v) { Log.d("Record","Click"); if (recording){ new stopRecordingTask().execute(rar,null,null); startStop.setText("Record"); statusBar.setText("Recording Finished, ready to Encode"); }else{ recording = true; new startRecordingTask().execute(rar,null,null); startStop.setText("Stop"); statusBar.setText("Recording Started"); } } }; private class startRecordingTask extends AsyncTask<RehearsalAudioRecorder,Void,Void>{ @Override protected Void doInBackground(RehearsalAudioRecorder... rs) { RehearsalAudioRecorder r = rs[0]; r.setOutputFile("/sdcard/rarOut.wav"); r.prepare(); r.start(); return null; } } private class stopRecordingTask extends AsyncTask<RehearsalAudioRecorder,Void,Void>{ @Override protected Void doInBackground(RehearsalAudioRecorder... rs) { RehearsalAudioRecorder r = rs[0]; r.stop(); r.reset(); return null; } } In Logcat, I always get output like this, which has me stumped. I have no idea what's causing it (I'm logging the RehearsalAudioRecorder class, and it's being started/stopped correctly by the button clicks. This output occurs after the log output for the button click and correct stop() method call) 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: STOPPED 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: ERROR 12-19 11:59:11.172: ERROR/AudioRecord-JNI(22662): Unable to retrieve AudioRecord object, can't record 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): Error occured in updateListener, recording is aborted 12-19 11:59:11.172: ERROR/uk.ac.cam.tfmw2.steg.RehearsalAudioRecorder(22662): stop() called on illegal state: ERROR ... 10 or more times I've been fiddling with this all day and I'm not getting anywhere, any input would be greatly appreciated. Update I've replace the AsyncTasks with Threads, still doesn't work, the app completely hangs when I click record, despite the fact the Log indicates there's nothing going on in the main thread. Still completely stumped.
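
One small bug is visible in the posted click handler and it matches the log: recording is never set back to false, so every click after the first one takes the stop path and calls stop() on a recorder that is already stopped, which is exactly the repeated "stop() called on illegal state" output. That does not by itself explain the ten-second freeze, but fixing it and logging the thread name makes it easier to confirm whether the blocking work really runs off the main thread. A sketch, assuming the surrounding fields from the original activity:

    private OnClickListener RecordListener = new OnClickListener() {
        @Override
        public void onClick(View v) {
            Log.d("Record", "Click on thread " + Thread.currentThread().getName());
            if (recording) {
                recording = false;                 // was never reset in the original
                new stopRecordingTask().execute(rar);
                startStop.setText("Record");
                statusBar.setText("Recording Finished, ready to Encode");
            } else {
                recording = true;
                new startRecordingTask().execute(rar);
                startStop.setText("Stop");
                statusBar.setText("Recording Started");
            }
        }
    };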

    Read the article

  • Very different font sizes across browsers

    - by Yang
    Chrome/WebKit and Firefox have different rendering engines which render fonts differently, in particular with differing dimensions. This isn't too surprising, but what's surprising is the magnitude of some of the differences. I can always tweak individual elements on a page to be more similar, but that's tedious, to say the least. I've been searching for more systematic solutions, but many resources (e.g. SO answers) simply say "use a reset package." While I'm sure this fixes a bunch of other things like padding and spacing, it doesn't seem to make any difference for font dimensions. For instance, if I take the reset package from http://html5reset.org/, I can show pretty big differences (note the layout dimensions shown in the inspectors). [The images below are actually higher res than shown/resized in this answer.] <h1 style="font-size:64px; background-color: #eee;">Article Header</h1> With Helvetica, Chrome is has the shorter height instead. <h1 style="font-size:64px; background-color: #eee; font-family: Helvetica">Article Header</h1> Using a different font, Chrome again renders a much taller font, but additionally the letter spacing goes haywire (probably due to the boldification of the font): <style> @font-face { font-family: "MyriadProRegular"; src: url("fonts/myriadpro-regular-webfont.eot"); src: local("?"), url("fonts/myriadpro-regular-webfont.woff") format("woff"), url("fonts/myriadpro-regular-webfont.ttf") format("truetype"), url("fonts/myriadpro-regular-webfont.svg#webfonteknRmz0m") format("svg"); font-weight: normal; font-style: normal; } @font-face { font-family: "MyriadProLight"; src: url("fonts/myriadpro-light-webfont.eot"); src: local("?"), url("fonts/myriadpro-light-webfont.woff") format("woff"), url("fonts/myriadpro-light-webfont.ttf") format("truetype"), url("fonts/myriadpro-light-webfont.svg#webfont2SBUkD9p") format("svg"); font-weight: normal; font-style: normal; } @font-face { font-family: "MyriadProSemibold"; src: url("fonts/myriadpro-semibold-webfont.eot"); src: local("?"), url("fonts/myriadpro-semibold-webfont.woff") format("woff"), url("fonts/myriadpro-semibold-webfont.ttf") format("truetype"), url("fonts/myriadpro-semibold-webfont.svg#webfontM3ufnW4Z") format("svg"); font-weight: normal; font-style: normal; } </style> ... <h1 style="font-size:64px; background-color: #eee; font-family: Helvetica">Article Header</h1> I've tried a few resets/normalize packages to no avail. I just wanted to confirm here that this is indeed a fact of life (even omitting the more glaring offenders like IE and mobile) and I'm not missing some super-awesome solution to this mess.
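
Two tweaks that usually narrow (though not remove) these gaps, grounded in the symptoms above: h1 is bold by default, so with a single-weight @font-face family each engine synthesises its own faux bold, and an unspecified line-height lets each engine derive the box height from its own reading of the font metrics. A hedged CSS sketch, reusing the selectors from the examples:

    h1 {
        /* Avoid synthesised bold when only a regular weight was registered;
           each browser fakes bold differently, which shows up as the spacing
           differences described above. */
        font-weight: normal;

        /* An explicit line-height keeps the element height from depending on
           each engine's interpretation of the font's internal metrics. */
        line-height: 1.1;
    }

A reset stylesheet will not change either of these, since neither is about default margins or padding.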

    Read the article

  • What is there so useful in the Decorator Pattern? My example doesn't work

    - by Green
    The book says: The decorator pattern can be used to extend (decorate) the functionality of a certain object I have a rabbit animal. And I want my rabbit to have, for example, reptile skin. Just want to decorate a common rabbit with reptile skin. I have the code. First I have abstract class Animal with everythig that is common to any animal: abstract class Animal { abstract public function setSleep($hours); abstract public function setEat($food); abstract public function getSkinType(); /* and more methods which for sure will be implemented in any concrete animal */ } I create class for my rabbit: class Rabbit extends Animal { private $rest; private $stomach; private $skinType = "hair"; public function setSleep($hours) { $this->rest = $hours; } public function setFood($food) { $this->stomach = $food; } public function getSkinType() { return $this->$skinType; } } Up to now everything is OK. Then I create abstract AnimalDecorator class which extends Animal: abstract class AnimalDecorator extends Animal { protected $animal; public function __construct(Animal $animal) { $this->animal = $animal; } } And here the problem comes. Pay attention that AnimalDecorator also gets all the abstract methods from the Animal class (in this example just two but in real can have many more). Then I create concrete ReptileSkinDecorator class which extends AnimalDecorator. It also has those the same two abstract methods from Animal: class ReptileSkinDecorator extends AnimalDecorator { public function getSkinColor() { $skin = $this->animal->getSkinType(); $skin = "reptile"; return $skin; } } And finaly I want to decorate my rabbit with reptile skin: $reptileSkinRabbit = ReptileSkinDecorator(new Rabbit()); But I can't do this because I have two abstract methods in ReptileSkinDecorator class. They are: abstract public function setSleep($hours); abstract public function setEat($food); So, instead of just re-decorating only skin I also have to re-decorate setSleep() and setEat(); methods. But I don't need to. In all the book examples there is always ONLY ONE abstract method in Animal class. And of course it works then. But here I just made very simple real life example and tried to use the Decorator pattern and it doesn't work without implementing those abstract methods in ReptileSkinDecorator class. It means that if I want to use my example I have to create a brand new rabbit and implement for it its own setSleep() and setEat() methods. OK, let it be. But then this brand new rabbit has the instance of commont Rabbit I passed to ReptileSkinDecorator: $reptileSkinRabbit = ReptileSkinDecorator(new Rabbit()); I have one common rabbit instance with its own methods in the reptileSkinRabbit instance which in its turn has its own reptileSkinRabbit methods. I have rabbit in rabbit. But I think I don't have to have such possibility. I don't understand the Decarator pattern right way. Kindly ask you to point on any mistakes in my example, in my understanding of this pattern. Thank you.
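
The usual resolution is to let the decorator base class forward every Animal method to the wrapped instance, so a concrete decorator only overrides the one behaviour it changes and nothing else has to be re-implemented. A sketch following the class names above (it also sidesteps the $this->$skinType typo by returning the property directly):

    <?php
    abstract class Animal {
        abstract public function setSleep($hours);
        abstract public function setEat($food);
        abstract public function getSkinType();
    }

    class Rabbit extends Animal {
        private $rest;
        private $stomach;
        private $skinType = "hair";

        public function setSleep($hours) { $this->rest = $hours; }
        public function setEat($food)    { $this->stomach = $food; }
        public function getSkinType()    { return $this->skinType; }
    }

    abstract class AnimalDecorator extends Animal {
        protected $animal;

        public function __construct(Animal $animal) {
            $this->animal = $animal;
        }

        // Default behaviour: delegate everything to the decorated animal.
        public function setSleep($hours) { $this->animal->setSleep($hours); }
        public function setEat($food)    { $this->animal->setEat($food); }
        public function getSkinType()    { return $this->animal->getSkinType(); }
    }

    class ReptileSkinDecorator extends AnimalDecorator {
        // Only the skin changes; sleeping and eating still go to the rabbit.
        public function getSkinType() { return "reptile"; }
    }

    $reptileSkinRabbit = new ReptileSkinDecorator(new Rabbit());
    echo $reptileSkinRabbit->getSkinType();   // prints "reptile"

With the delegation in the base decorator there is no "rabbit in rabbit" problem: there is exactly one Rabbit, and the decorator is just a thin wrapper that intercepts the calls it cares about and passes the rest through.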

    Read the article

  • How to design a data model that deals with (real) contracts?

    - by Geoffrey
    I was looking for some advice on designing a data model for contract administration. The general life cycle of a contract is thus: Contract is created and in a "draft" state. It is viewable internally and changes may be made. Contract goes out to vendor, status is set to "pending" Contract is rejected by vendor. At this state, nothing can be done to the contract. No statuses may be added to the collection. Contract is accepted by vendor. At this state, nothing can be done to the contract. No statuses may be added to the collection. I obviously want to avoid a situation where the contract is accepted and, say, the amount is changed. Here are my classes: [EnforceNoChangesAfterDraftState] public class VendorContract { public virtual Vendor Vendor { get; set; } public virtual decimal Amount { get; set; } public virtual VendorContact VendorContact { get; set; } public virtual string CreatedBy { get; set; } public virtual DateTime CreatedOn { get; set; } public virtual FileStore Contract { get; set; } public virtual IList<VendorContractStatus> ContractStatus { get; set; } } [EnforceCorrectWorkflow] public class VendorContractStatus { public virtual VendorContract VendorContract { get; set; } public virtual FileStore ExecutedDocument { get; set; } public virtual string Status { get; set; } public virtual string Reason { get; set; } public virtual string CreatedBy { get; set; } public virtual DateTime CreatedOn { get; set; } } I've omitted the filestore class, which is basically a key/value lookup to find the document based on its guid. The VendorContractStatus is mapped as a many-to-one in Nhibernate. I then use a custom validator as described here. If anything but draft is returned in the VendorContractStatus collection, no changes are allowed. Furthermore the VendorContractStatus must follow the correct workflow (you can add a rejected after a pending, but you can't add anything else to the collection if a reject or accepted exists, etc.). All sounds alright? Well a colleague has argued that we should simply add an "IsDraft" bool property to VendorContract and not accept updates if IsDraft is false. Then we should setup a method inside of VendorContractStatus for updating the status, if something gets added after a draft, it sets the IsDraft property of VendorContract to false. I do not like this as it feels like I'm dirtying up the POCOs and adding logic that should persist in the validation area, that no rules should really exist in these classes and they shouldn't be aware of their states. Any thoughts on this and what is the better practice from a DDD perspective? From my view, if in the future we want more complex rules, my way will be more maintainable over the long run. Say we have contracts over a certain amount to be approved by a manager. I would think it would be better to have a one-to-one mapping with a VendorContractApproval class, rather than adding IsApproved properties, but that's just speculation. This might be splitting hairs, but this is the first real gritty enterprise software project we've done. Any advice would be appreciated!
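
If the status workflow itself needs guarding (draft to pending, pending to accepted or rejected, then frozen), one way to keep the POCOs free of that logic is a small transition table consulted by the validator before a status is appended. A sketch using the status names from the description; the validator wiring and NHibernate persistence are assumed to stay as outlined above:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class ContractWorkflow
    {
        // Which statuses may legally follow the current one.
        private static readonly Dictionary<string, string[]> Allowed =
            new Dictionary<string, string[]>(StringComparer.OrdinalIgnoreCase)
            {
                { "draft",    new[] { "pending" } },
                { "pending",  new[] { "accepted", "rejected" } },
                { "accepted", new string[0] },   // terminal: contract is frozen
                { "rejected", new string[0] }
            };

        public static bool CanAppend(IEnumerable<string> history, string next)
        {
            string current = history.LastOrDefault() ?? "draft";
            string[] followers;
            return Allowed.TryGetValue(current, out followers)
                   && followers.Contains(next, StringComparer.OrdinalIgnoreCase);
        }
    }

    // Usage inside a workflow validator (or the aggregate, if that trade-off is preferred):
    //   ContractWorkflow.CanAppend(new[] { "draft", "pending" }, "accepted")            -> true
    //   ContractWorkflow.CanAppend(new[] { "draft", "pending", "accepted" }, "draft")   -> false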

    Read the article

  • Undefined reference to vtable

    - by RyanG
    So, I'm getting the infamously horrible "undefined reference to 'vtable..." error for the following code (The class in question is CGameModule.) and I cannot for the life of me understand what the problem is. At first, I thought it was related to forgetting to give a virtual function a body, but as far as I understand, everything is all here. The inheritance chain is a little long, but here is the related source code. I'm not sure what other information I should provide. My code: class CGameModule : public CDasherModule { public: CGameModule(Dasher::CEventHandler *pEventHandler, CSettingsStore *pSettingsStore, CDasherInterfaceBase *pInterface, ModuleID_t iID, const char *szName) : CDasherModule(pEventHandler, pSettingsStore, iID, 0, szName) { g_pLogger->Log("Inside game module constructor"); m_pInterface = pInterface; } virtual ~CGameModule() {}; std::string GetTypedTarget(); std::string GetUntypedTarget(); bool DecorateView(CDasherView *pView) { //g_pLogger->Log("Decorating the view"); return false; } void SetDasherModel(CDasherModel *pModel) { m_pModel = pModel; } virtual void HandleEvent(Dasher::CEvent *pEvent); private: CDasherNode *pLastTypedNode; CDasherNode *pNextTargetNode; std::string m_sTargetString; size_t m_stCurrentStringPos; CDasherModel *m_pModel; CDasherInterfaceBase *m_pInterface; }; } Inherits from... class CDasherModule; typedef std::vector<CDasherModule*>::size_type ModuleID_t; /// \ingroup Core /// @{ class CDasherModule : public Dasher::CDasherComponent { public: CDasherModule(Dasher::CEventHandler * pEventHandler, CSettingsStore * pSettingsStore, ModuleID_t iID, int iType, const char *szName); virtual ModuleID_t GetID(); virtual void SetID(ModuleID_t); virtual int GetType(); virtual const char *GetName(); virtual bool GetSettings(SModuleSettings **pSettings, int *iCount) { return false; }; private: ModuleID_t m_iID; int m_iType; const char *m_szName; }; Which inherits from.... namespace Dasher { class CEvent; class CEventHandler; class CDasherComponent; }; /// \ingroup Core /// @{ class Dasher::CDasherComponent { public: CDasherComponent(Dasher::CEventHandler* pEventHandler, CSettingsStore* pSettingsStore); virtual ~CDasherComponent(); void InsertEvent(Dasher::CEvent * pEvent); virtual void HandleEvent(Dasher::CEvent * pEvent) {}; bool GetBoolParameter(int iParameter) const; void SetBoolParameter(int iParameter, bool bValue) const; long GetLongParameter(int iParameter) const; void SetLongParameter(int iParameter, long lValue) const; std::string GetStringParameter(int iParameter) const; void SetStringParameter(int iParameter, const std::string & sValue) const; ParameterType GetParameterType(int iParameter) const; std::string GetParameterName(int iParameter) const; protected: Dasher::CEventHandler *m_pEventHandler; CSettingsStore *m_pSettingsStore; }; /// @} #endif
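    For what it's worth, the usual cause of this particular linker error is a virtual function that is declared but never defined (or whose .cpp file is not being compiled/linked): the compiler emits the vtable in the translation unit that defines the class's first non-inline virtual member, so a missing definition of, say, HandleEvent leaves the vtable reference unresolved. A minimal sketch of the out-of-line definitions, assuming a CGameModule.cpp exists and is part of the build (the method bodies here are placeholders, not the real implementation):

        // CGameModule.cpp -- file name and bodies are assumptions
        #include "CGameModule.h"

        // Declared virtual in the header, so it needs exactly one linked definition,
        // otherwise the vtable for CGameModule is never emitted.
        void CGameModule::HandleEvent(Dasher::CEvent *pEvent) {
            // ... react to the event ...
        }

        std::string CGameModule::GetTypedTarget() {
            return m_sTargetString.substr(0, m_stCurrentStringPos);
        }

        std::string CGameModule::GetUntypedTarget() {
            return m_sTargetString.substr(m_stCurrentStringPos);
        }

    If these definitions already exist, the next thing to check is that the file containing them is actually listed in the build and that no stale object files are being linked.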

    Read the article

  • Help to argue why to develop software on a physical computer rather than via a remote desktop

    - by s5804
    Remote desktops are great, often a blessing, and cost effective (compared to leasing expensive cables). I am not arguing against remote desktops; my point is only that if one has the alternative to use either a remote desktop or a physical computer, I would choose the latter. Also note that I am not arguing for or against remote work practices; in my case I am required to be physically present in the office when developing software. Background: I work in a company whose main business is not software development. The company IT policies are therefore mainly focused on security and on efficiently deploying/maintaining thousands of computers for users. Further, the typical employee runs typical Office applications, like a word processor. Because safety/stability is such a big priority, every non-production system/application must be deployed into a physically separate network, called the test network. Software development of course also belongs in the test network. To access the test network the company has created a standard policy, which dictates that access shall go only via a remote desktop client. In practice, from one's production computer one opens a remote desktop client to a virtual computer located in the test network. On the virtual computer's remote desktop one can access/run/install all development tools, like the Eclipse IDE. Another solution would be a dedicated physical computer that is physically connected only to the test network. Both solutions are available in the company. I have tested both approaches and found running Eclipse IDE and SQL Developer in the remote desktop client to be sluggish (keystrokes are delayed), commands like Alt-Tab take me out of the remote client, and so on. Further, screen resolution and colors are different, just to mention a few issues. So there is nothing technically wrong with the remote client; it is just not optimal and frankly de-motivating. Now, with the new policies put in place, the plan is to remove the physical computers connected to the test network. I am looking for help arguing why software developers should have a dedicated physical development computer, to be productive and cost effective. Remember that we are physically in the office. Note also that we are talking about approx. 50 computers out of 2000 employees, so the extra budget is relatively small; this is more about policy than cost. Please note that there are lots of similar setups in other companies that work great thanks to perfectly tuned systems. However, in my case it is sluggish, and it would cost more money to troubleshoot and fine-tune the performance than to keep a few physical computers. As a business case we have argued that productivity will go down by 25%, though my feeling is that the reality is probably closer to 50%. This business case isn't really accepted, and I find it very difficult to defend to managers who have never used a rich IDE in their lives, never mind developed software. Further, the test network and remote client have no guaranteed service level, so they are down for a few hours per month with the lowest priority on the fix list. Help is appreciated.

    Read the article

  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation. I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home). The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server? A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client info and personal info such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox or Amazon AWS, or similar. I couldn't justify that cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know if there is a similar service that might address my security concerns. I could copy all my files to my laptop because it has the space, but then I would have to sync between my home computer and my laptop, and I've found in the past that I'm not very good about doing this. And if my laptop is lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. Therefore, it seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I consider the risk of losing the laptop to be higher. I am not an obvious hacking target online. My home broadband is cable Internet, and it seems very reliable. So I want to know the best (reasonable) way to securely access my data (from my laptop) while on the road. I only need to access it from this one computer, although I may connect from my phone's 3G/4G, via WiFi, or over some client's broadband, etc., so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar). SSH/SFTP would provide about all the functionality I anticipate needing. I would like to use SFTP and Dolphin to browse and download files, and SSH and the terminal for anything else. My Linux file server is set up with OpenSSH. I think I have SSH relatively secured, and I'm using DenyHosts too. But I want to go several steps further: I want to get the chance that anyone else can get into my server as close to zero as possible, while still allowing me to get access from the road. I'm not a sysadmin or programmer or a real "superuser"; I have to spend most of my time doing other things. I've heard about "port knocking" but I have never used it and I don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as: Top 20 OpenSSH Server Best Security Practices, 20 Linux Server Hardening Security Tips, Debian Linux Stop SSH User Hacking / Cracking Attacks with DenyHosts Software, and more. I have not implemented every single thing I've read about, and I probably can't do that.
    But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping I can get suggestions here that are within my capability to implement and that leverage these facts to provide much better security than the general-purpose advice in the articles above.
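    One concrete starting point, sketched below, is to lock sshd down to key-only authentication for a single account in /etc/ssh/sshd_config. The directives are standard OpenSSH options; the values and the account name are illustrative placeholders:

        # /etc/ssh/sshd_config (excerpt) -- single-user hardening sketch
        PermitRootLogin no
        PasswordAuthentication no     # key-based logins only; keep the passphrase-protected key on the laptop
        PubkeyAuthentication yes
        AllowUsers youruser           # placeholder: the one account allowed to log in
        MaxAuthTries 3
        LoginGraceTime 30
        Port 2222                     # a non-default port mainly cuts log noise; it is not a security control

    Combined with DenyHosts (or fail2ban) and a firewall that exposes only that one port, this removes the most common attack paths; port knocking or a VPN can be layered on top later if wanted.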

    Read the article

  • PHP setting cookies in a child class

    - by steve
    I am writing a custom session handler and for the life of me I cannot get a cookie to set in it. I'm not outputting anything to the browser before I set the cookie, but it still doesn't work. It's killing me. The cookie will set if I set it in the script in which I define and call the session handler. If necessary I will post more code. Any ideas, people? <?php /* require the needed classes comment out what is not needed */ require_once("classes/sessionmanager.php"); require_once("classes/template.php"); require_once("classes/database.php"); $title=" "; //titlebar of the web browser $description=" "; $keywords=" "; //meta keywords $menutype="default"; //default or customer, customer is elevated $pagetitle="dflsfsf "; //title of the webpage $pagebody=" "; //body of the webpage $template=template::def_instance(); $database=database::def_instance(); $session=sessionmanager::def_instance(); $session->sessions(); session_start(); ?> and this is the function that actually sets the cookie for the session: function write($session_id,$session_data) { $session_id = mysql_real_escape_string($session_id); $session_data = mysql_real_escape_string(serialize($session_data)); $expires = time() + 3600; $user_ip = $_SERVER['REMOTE_ADDR']; $bol = FALSE; $time = time(); $newsession = FALSE; $auth = FALSE; $query = "SELECT * FROM 'sessions' WHERE 'expires' > '$time'"; $sessions_result = $this->query($query); $newsession = $this->newsession_check($session_id,$sessions_result); while($sessions_array = mysql_fetch_array($sessions_result) AND $auth == FALSE) { $session_array = $this->strip($sessions_array); $auth = $this->auth_check($session_array,$session_id); } /* this is an authentic session. build queries and update it */ if($auth == TRUE AND $newsession == FALSE) { $session_data = mysql_real_escape_string($session_data); $update_query1 = "UPDATE 'sessions' SET 'user_ip' = '$user_ip' WHERE 'session_id' = '$session_id'"; $update_query2 = "UPDATE 'sessions' SET 'data' = '$session_data' WHERE 'session_id' = '$session_id'"; $update_query3 = "UPDATE 'sessions' SET 'expires' = '$expires' WHERE 'session_id' = '$session_id'"; $this->query($update_query1); $this->query($update_query2); $this->query($update_query3); $bol = TRUE; } elseif($newsession == TRUE) { /* this is a new session, build and create it */ $random_number = $this->obtain_random(); $cookieval = hash("sha512",$random_number); setcookie("rndn",$cookieval); $query = "INSERT INTO sessions VALUES('$session_id','0','$user_ip','$random_number','$session_data','$expires')"; $this->query($query); //echo $cookieval."this is the cookie <<"; $bol = TRUE; } return $bol; }
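    A likely explanation, and a sketch of a workaround (an assumption based on how PHP session handlers normally behave, not a confirmed diagnosis): PHP calls a custom handler's write() callback at script shutdown, after output has already been sent, so setcookie() inside write() fails the headers-already-sent check. Setting the cookie earlier in the request, while headers are still open, avoids the problem:

        <?php
        // In the page script, before any output is produced:
        require_once("classes/sessionmanager.php");

        $session = sessionmanager::def_instance();
        $session->sessions();
        session_start();

        // Set the companion cookie here rather than inside write();
        // write() can still store the matching value in the database later.
        if (!isset($_COOKIE['rndn'])) {
            $cookieval = hash("sha512", mt_rand());
            setcookie("rndn", $cookieval, time() + 3600, "/");
        }
        ?>

    Alternatively, calling session_write_close() explicitly before any output is sent forces write() to run at a point where setcookie() can still succeed.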

    Read the article

  • XForms and multiple inputs for same model tag

    - by iHeartGreek
    Hi! I apologize ahead of time if I am not asking this properly; it is hard to put into words what I am asking. I have an XForms model such as: <file> <criteria> <criterion></criterion> </criteria> </file> I want multiple input text boxes, each creating a new criterion tag, with a user interface such as: <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> <xf:input ref="/file/criteria/criterion" model="select_data"> <xf:label>Select</xf:label> </xf:input> And I would like the XML output to look like this (once the user has entered the info): <file> <criteria> <criterion>AAA</criterion> <criterion>BBB</criterion> <criterion>CCC</criterion> </criteria> </file> The way I have it doesn't work, because all three input fields refer to the same criterion node. How do I differentiate them? Thanks! I hope that made some sense! BEGIN FIRST EDIT Thanks for the responses for the basic text box! However, I now need to do this with a listbox, and for the life of me I can't figure out how. I read somewhere to use the xforms-select and xforms-deselect events, but I didn't know where to place them, and the places I tried gave me very strange behaviour. I am currently implementing the following: <xf:select ref="instance('criteria_data')/criteria/criterion" selection="" appearance="compact" > <xf:label>Choose criteria</xf:label> <xf:itemset nodeset="instance('criteria_choices')/choice"> <xf:label ref="@label"></xf:label> <xf:value ref="."></xf:value> </xf:itemset> </xf:select> However, when multiple choices are submitted, all selected values are inserted into the same node, separated by spaces. For example, if AAA, BBB and FFF were selected from the listbox, it would result in the following XML: <criterion>AAA BBB FFF</criterion> How do I change my code so that each selection ends up in a separate node? I.e. I want it to look like this: <criterion>AAA</criterion> <criterion>BBB</criterion> <criterion>FFF</criterion> Thanks! END FIRST EDIT BEGIN SECOND EDIT: For the listboxes (i.e. xf:select appearance="compact") I ended up allowing the space-separated values in a single node and then transformed that XML with XSLT to generate a properly formatted document with separate individual nodes. Unfortunately, I did not find a less cumbersome way to insert them into separate nodes in the first place. The selected answer works very well for text boxes, however, which is why I selected it as the answer. END SECOND EDIT
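    For the original text-box case, a sketch of the approach that usually works (an illustration, not the asker's accepted answer): pre-populate the instance with as many criterion elements as there are inputs and bind each input to one of them with a positional XPath predicate:

        <file>
          <criteria>
            <criterion></criterion>
            <criterion></criterion>
            <criterion></criterion>
          </criteria>
        </file>

        <xf:input ref="/file/criteria/criterion[1]" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>
        <xf:input ref="/file/criteria/criterion[2]" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>
        <xf:input ref="/file/criteria/criterion[3]" model="select_data">
          <xf:label>Select</xf:label>
        </xf:input>

    Each input now edits its own node, so the submitted document contains three separate criterion elements.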

    Read the article

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day, with the Launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (Released). We have had TFS 2010 in Production for nearly 6 months and have had only minimal problems. Update 12th April 2010  – Added Scott Hanselman’s tweet about the MSDN download release time. SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition. Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! @shanselman - Scott Hanselman Same as before, we need to Uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS2010 you can follow our Rules To Better TFS 2010 Migration and read my post on our successes.   We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of running a snapshot as well as taking a DB backup. Done - Snapshot the hyper-v server Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server. Done - Uninstall Visual Studio Team Explorer 2010 RC You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server. Done - Uninstall TFS 2010 RC Done - Install TFS 2010 RTM Done - Configure TFS 2010 RTM Pick the Upgrade option and point it at your existing “tfs_Configuration” database to load all of the existing settings Done - Upgrade the SharePoint Extensions Upgrade Build Servers (Pending) Test the server The back out plan, and you should always have one, is to restore the snapshot. Upgrading to Team Foundation Server 2010 – Done The first thing you need to do is off the TFS server and then log into the Hyper-v server and create a snapshot. Figure: Make sure you turn the server off and delete all old snapshots before you take a new one I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM’ness that is coming next. The upgrade procedure for TFS2010 is to uninstal the old version and install the new one. Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.   Figure: Most of the heavy lifting is done by the Uninstaller, but make sure you have removed any of the client bits first. Specifically Visual Studio 2010 or Team Explorer 2010.  Once the uninstall is complete, this took around 5 minutes for me, you can begin the install of the RTM. Running the 64 bit OS will allow the application to use more than 2GB RAM, which while not common may be of use in heavy load situations. Figure: It is always recommended to install the 64bit version of a server application where possible. 
I do not think it is likely, with SharePoint 2010 and Exchange 2010  and even Windows Server 2008 R2 being 64 bit only, I do not think there will be another release of a server app that is 32bit. You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on out TFS server along with Analysis services and Reporting Services. But our SharePoint server lives elsewhere. Figure: This always confuses people, but in reality it makes sense. Don’t install what you do not need. Every extra you install has an impact of performance. If you are integrating with SharePoint you will need to run this install on every Front end server in your farm and don’t forget to upgrade your Build servers and proxy servers later. Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS)   It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers. Figure: Installing Microsoft .NET Framework 4 takes the most time. Figure: Now run Windows Update, and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-on’s, Check in Policies and other bits as well. Configure Team Foundation Server 2010 – Done Now you can configure the server. If you have no key you will need to pick “Install a Trial Licence”, but it is only £500, or free with a MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key. Figure: You can pick trial and add your key later using the TFS Server Admin. Here is where the real choices happen. We are doing an Upgrade from a previous version, so I will pick Upgrade the same as all you folks that are using the RC or TFS 2008. Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgraded them to the release.   Once you have entered your database server name you can click “List available databases” and it will show what it can upgrade. Figure: Select your database from the list and at this point, make sure you have a valid backup. At this point you have not made ANY changes to the databases. At this point the configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008 refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want you can pick different options. Figure: Set the application tier account and Authentication method to use. We use NTLM to keep things simple as we host our TFS server externally for our remote developers.  Figure: Setting your TFS server URL’s to be the remote URL’s allows the reports to be accessed without using VPN. Very handy for those remote developers. Figure: Detected the existing Warehouse no problem. Figure: Again we love green ticks. It gives us a warm fuzzy feeling. Figure: The username for connecting to Reporting services should be a domain account (if you are on a domain that is). Figure: Setup the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.   You then need to run all of your readiness checks. These check can save your life! 
it will check all of the settings that you have entered as well as checking all the external services are configures and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous version were not. Microsoft changes the install to two steps, Install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install that it will complete. if you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.   Figure: Now we have everything setup the configuration wizard can do its work.  Figure: Took a while on the “Web site” stage for some point, but zipped though after that.  Figure: last wee bit. TFS Needs to do a little tinkering with the data to complete the upgrade. Figure: All upgraded. I am not worried about the yellow triangle as SharePoint was being a little silly Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException) Exception Stack Trace:    at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)    at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) [Info   @16:10:16.307] Benign exception caught as part of verify: Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException) Exception Stack Trace:    at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)    at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify() Inner Exception Details: Exception Message: TF249064: The following Web service returned an response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException) Exception Data Dictionary: ResponseStatusCode = InternalServerError I’ll look at SharePoint after, probably the SharePoint box just needs a restart or a kick If there is a problem with SharePoint it will come out in testing, But I will definatly be passing this on to Microsoft.   Upgrading the SharePoint connector to TFS 2010 You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front end servers. To do this uninstall  the TFS 2010 RC from it in the same way as the server, and then install just the RTM Extensions. 
    Figure: Only install the SharePoint Extensions on your SharePoint front-end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.   Figure: When you configure SharePoint it uploads all of the solutions and templates. Figure: Everything is uploaded successfully. Figure: TFS even remembered the settings from the previous installation, fantastic.   Upgrading the Team Foundation Build Servers to TFS 2010 Just like on the SharePoint servers, you will need to upgrade the Build Server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed; you will need to remove this as well. (Coming Soon) Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS2010 If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack: Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010 If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest version of their applications with SSW Diagnostic, which will check for Service Packs and hotfixes to Visual Studio as well.   Technorati Tags: TFS,TFS2010,TFS 2010,Upgrade

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
    One of the things that might be surprising in the LINQ Distinct standard query operator is that it doesn’t automatically work properly on custom classes. There are reasons for this, which I’ll explain shortly. The example I’ll use in this post focuses on pulling a unique list of names to load into a drop-down list. I’ll explain the sample application, show you typical first shot at Distinct, explain why it won’t work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I’m using are  LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010. The function of the example program is to show a list of people that I follow.  In Twitter API vernacular, these people are called “Friends”; though I’ve never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you’ll see my objects named accordingly. Where Distinct comes into play is because I want to have a drop-down list with the names of the friends appearing in the list. Some friends are quite verbose, which means I can’t just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn’t do anything with the drop-down list and I leave that up to imagination for what it’s practical purpose could be; perhaps a filter for the list if I only want to see a certain person’s tweets or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you’ll need to authenticate with Twitter, because I’m using OAuth (DotNetOpenAuth), for authentication, and then you’ll see the drop-down list of names above the grid with the most recent tweets from friends. Here’s what the application looks like when it runs: As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we’ll get there soon. This is an ASP.NET MVC2 application, written with VS 2010. 
Here’s the View that produces this screen: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %> <%@ Import Namespace="DistinctSelectList.Models" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">     Home Page </asp:Content><asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">     <fieldset>         <legend>Twitter Friends</legend>         <div>             <%= Html.DropDownListFor(                     twendVM => twendVM.FriendNames,                     Model.FriendNames,                     "<All Friends>") %>         </div>         <div>             <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)                    .Name("TwitterFriendsGrid")                    .Columns(cols =>                     {                         cols.Template(col =>                             { %>                                 <img src="<%= col.ImageUrl %>"                                      alt="<%= col.ScreenName %>" />                         <% });                         cols.Bound(col => col.ScreenName);                         cols.Bound(col => col.Tweet);                     })                    .Render(); %>         </div>     </fieldset> </asp:Content> As shown above, the Grid is from Telerik’s Extensions for ASP.NET MVC. The first column is a template that renders the user’s Avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { /// /// For finding friend info on screen /// public class TwitterFriendsViewModel { /// /// Display names of friends in drop-down list /// public List FriendNames { get; set; } /// /// Display tweets in grid /// public List Tweets { get; set; } } } I created the TwitterFreindsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below: namespace DistinctSelectList.Models { /// /// Info on friend tweets /// public class TweetViewModel { /// /// User's avatar /// public string ImageUrl { get; set; } /// /// User's Twitter name /// public string ScreenName { get; set; } /// /// Text containing user's tweet /// public string Tweet { get; set; } } } The initial Twitter query returns much more information than we need for our purposes and this a special class for displaying info in the View.  Now you know about the View and how it’s constructed. Let’s look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I’ll skip the description of the authentication because it’s a normal part of using OAuth with LINQ to Twitter. Instead, we’ll drill down and focus on the Distinct operator. 
However, I’ll show you the entire controller, below,  so that you can see how it all fits together: using System.Linq; using System.Web.Mvc; using DistinctSelectList.Models; using LinqToTwitter; namespace DistinctSelectList.Controllers { [HandleError] public class HomeController : Controller { private MvcOAuthAuthorization auth; private TwitterContext twitterCtx; /// /// Display a list of friends current tweets /// /// public ActionResult Index() { auth = new MvcOAuthAuthorization(InMemoryTokenManager.Instance, InMemoryTokenManager.AccessToken); string accessToken = auth.CompleteAuthorize(); if (accessToken != null) { InMemoryTokenManager.AccessToken = accessToken; } if (auth.CachedCredentialsAvailable) { auth.SignOn(); } else { return auth.BeginAuthorize(); } twitterCtx = new TwitterContext(auth); var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); var twendsVM = new TwitterFriendsViewModel { Tweets = friendTweets, FriendNames = friendNames }; return View(twendsVM); } public ActionResult About() { return View(); } } } The important part of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let’s dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience: var friendTweets = (from tweet in twitterCtx.Status where tweet.Type == StatusType.Friends select new TweetViewModel { ImageUrl = tweet.User.ProfileImageUrl, ScreenName = tweet.User.Identifier.ScreenName, Tweet = tweet.Text }) .ToList(); The LINQ to Twitter query above simplifies what we need to work with in the View and the reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList.  This brings us to the focus of this blog post, writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct() .ToList(); The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you’ll notice that the drop-down list contains many duplicates.  This will send you back to the code scratching your head, but there’s a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>.  In the case above, no parameters, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list. 
You don’t have problems with the built-in types, such as string, int, DateTime, etc, because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don’t implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference type objects.  You don’t have this problem with value types because the default implementation of Object.Equals is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So, the reason why Distinct didn’t produce the results we wanted was because we used a type that doesn’t define its own equality and Distinct used the default reference equality. This resulted in all objects being included in the results because they are all separate instances in memory with unique references. As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEqualityComparer<T>, but SelectListItem belongs to the .NET Framework Class Library.  Therefore, the solution is to create a custom type to implement IEqualityComparer<T>, as in the SelectListItemComparer class, shown below: using System.Collections.Generic; using System.Web.Mvc; namespace DistinctSelectList.Models { public class SelectListItemComparer : EqualityComparer { public override bool Equals(SelectListItem x, SelectListItem y) { return x.Value.Equals(y.Value); } public override int GetHashCode(SelectListItem obj) { return obj.Value.GetHashCode(); } } } The SelectListItemComparer class above doesn’t implement IEqualityComparer<SelectListItem>, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer is an abstract class, that implements IEqualityComparer<T> with Equals and GetHashCode abstract methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine if two items are equal.   Since SelectListItem.Value is type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class.You might have other criteria in your own object and would need to define what it means for your object to be equal. Now that we have an IEqualityComparer<SelectListItem>, let’s fix the problem. The code below modifies the query where we want distinct values: var friendNames = (from tweet in friendTweets select new SelectListItem { Text = tweet.ScreenName, Value = tweet.ScreenName }) .Distinct(new SelectListItemComparer()) .ToList(); Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept IEqualityComparer<T>’s, You can use the same techniques as shown here, with SelectListItemComparer, with those other operators as well. 
Now you know how to resolve problems with getting Distinct to work properly and also have a way to fix problems with other operators that require equality comparisons. @JoeMayo
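    As a small follow-on sketch (not from the original post; previousNames is a hypothetical earlier snapshot of the list), the same comparer slots straight into the other set-based operators, for example Except:

        // Names that appear in the current list but not in a previous snapshot.
        var newFriendNames = friendNames
            .Except(previousNames, new SelectListItemComparer())
            .ToList();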

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly much to the displeasure of the audience. They couldn't possibly have been less pleased than me.At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.Somewhere along the way, I learned that it was possible to load SharePoint 2010 in a Windows 7 which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to Hyper-V VHD yet, but that's on my list of things to try so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and Sharepoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop. 
The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availablity to help. I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPiovt-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system which I am not running on my laptop. Then, the instructions in Part 2, won't work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client.You also have to manually install the prerequisites. The instructions provides links to each item that you must manually install and provides a command-line instruction to execute which enables required Windows features.I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010,that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) 
You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint because it installs as part of the prerequisites automatically.When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:On the Choose the installation you want page of the installation wizard, I chose Server Farm.On the Server Type page, I chose Complete.At the end of the installation, I did not run the configuration wizard.Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3 which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end.It wasn't until much later as I was investigating an error that I encountered Dave Wickert's post that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint but have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have installed. Continue reading to see how I accomplished my objective.I unintalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:Install SQL Server 2008 R2 to get a database engine instance installed.Run the SharePoint configuration wizard to set up the SharePoint databases.In Central Administration, create a Web application using classic mode authentication as per a TechNet article on PowerPivot Authentication and Authorization.Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. 
    However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.
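    For reference, the config.xml edit mentioned near the start of this walkthrough comes down to one additional setting placed inside the existing Configuration element of the copied installation media (treat the exact Id as an assumption and verify it against the MSDN article linked above):

        <!-- files\Setup\config.xml on the copied SharePoint installation media -->
        <Setting Id="AllowWindowsClientInstall" Value="True"/>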

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >