Search Results

Search found 3565 results on 143 pages for 'in house distribution'.


  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently i have the following spec file to create a RPM. The spec file is generated by maven plugin to produce a RPM out of it. The question is: will i find files which are mentioned in the spec file after the rpm generation inside the BUILDROOT/SPECS/SOURCES/SRPMS structure? %define _unpackaged_files_terminate_build 0 Name: rpm-1 Version: 1.0 Release: 1 Summary: rpm-1 License: 2009 my org Distribution: My App Vendor: my org URL: www.my.org Group: Application/Collectors Packager: my org Provides: project Requires: /bin/sh Requires: jre >= 1.5 Requires: BASE_PACKAGE PreReq: dependency Obsoletes: project autoprov: yes autoreq: yes BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot %description %install if [ -e $RPM_BUILD_ROOT ]; then mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT else mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT fi ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2 ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2 ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh %files %defattr(-,myuser,mygroup,-) %dir "/usr/myusr/app" "/usr/myusr/app2" "/tmp/myapp/somefile" "/tmp/myapp/somefile2" "/usr/myusr/app/lib" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh" %dir "/usr/myusr/app/conf" %config "/usr/myusr/app/conf/log4j.xml" "/usr/myusr/app/conf/log4j.xml.deliver" %prep echo "hello from prepare" %pre -p /bin/sh #!/bin/sh if [ -s "/etc/init.d/myapp" ] then /etc/init.d/myapp stop rm /etc/init.d/myapp fi %post #!/bin/sh #create soft link script to services directory ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp chmod 555 /etc/init.d/myapp %preun #!/bin/sh #the argument being passed in indicates how many versions will exist #during an upgrade, this value will be 1, in which case we do not want to stop #the service since the new version will be running once this script is called #during an uninstall, the value will be 0, in which case we do want to stop #the service and remove the /etc/init.d script. if [ "$1" = "0" ] then if [ -s "/etc/init.d/myapp" ] then /etc/init.d/myapp stop rm /etc/init.d/myapp fi fi; %triggerin -- dependency, dependency1 echo "hello from install" %changelog * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk -update BuildPreReq to include rep-gtk and rep-gtkgnome * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk -0.27.2 * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk -added BuildPreReq -change name from Sawmill to Sawfish The problem i found is that the files (filter.txt in particular) after the generation process on a Ubuntu system but not on SuSE system. Which might be caused by different rpm versions ? Currently we have an integration test which fails based on the non existing of the file (filter.txt under a buildroot folder?)
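
    As a rough way to check (a sketch only; the path below is the rpm-maven-plugin's usual output location, so adjust it to your build), querying the finished .rpm shows exactly which files and scriptlets were packaged, independent of the rpm version differences between Ubuntu and SuSE:

        # List the files that actually ended up inside the generated package
        rpm -qlp target/rpm/rpm-1/RPMS/noarch/rpm-1-1.0-1.noarch.rpm

        # Show the embedded %pre/%post/%preun scriptlets
        rpm -qp --scripts target/rpm/rpm-1/RPMS/noarch/rpm-1-1.0-1.noarch.rpm

        # Or inspect the payload without installing it
        rpm2cpio target/rpm/rpm-1/RPMS/noarch/rpm-1-1.0-1.noarch.rpm | cpio -t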

    Read the article

  • Experienced developer trying to get an outsourcing contract with a current client

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc.), minimal documentation, no defined development process and an absolutely sickening amount of waste due to bureaucratic overhead. Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developer's job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works", a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data! Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank; the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software. This is my plan:
    1. Write a report explaining in depth all the problems with their current system.
    2. Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally.
    3. Write a high-level development plan, including what resources I will require.
    4. Hand 1, 2 and 3 to my manager; hopefully he passes it up the chain. Worst case he fires me, but this isn't so bad.
    5. A convinced executive decides to award my company a contract for the new system.
    I have 8 years' experience as a software contractor and have delivered my share of successful software products, but all working in-house for small/medium-sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time doing something this bold, I have my doubts. My question is: is this a good idea? If you think not, please spare no detail.

    Read the article

  • Link aggregation with FreeBSD 8 and a Cisco 3550, what am I doing wrong?

    - by Flamewires
    Hey, I am trying to set up link aggregation with LACP (well, anything that provides increased bandwidth and failover using my setup will work). I'm running FreeBSD 8.0 on 3 machines. M1 is running two 10/100 Ethernet cards set up for link aggregation using lagg. For reference:
        ifconfig em0 up
        ifconfig tx0 up
        ifconfig create lagg0
        ifconfig lagg0 laggproto lacp laggport tx0 laggport em0 192.168.1.16 netmask 255.255.255.0
    I plugged them into ports 1 and 2 of a Cisco 3550, then ran:
        configure terminal
        interface range Fa0/1 - 2
        switchport mode access
        switchport access vlan 1
        channel-group 1 mode active
    (Everything is in VLAN 1.) Now I'm able to connect the other computers to other ports on the switch, and failover works great: I can unplug cables in the middle of a transfer and the traffic gets rerouted. However, I'm not noticing any speed increase. My test setup: for load balancing I tried dst and src on the switch; neither seemed to give me a speed increase. I am SCPing two 500 MB files from the lagg computer to other computers (one file each) which are also running 10/100 full-duplex cards. I get transfer speeds of about 11.2-11.4 Mbps to a single host, and about half that (5.9-6.2 Mbps) when transferring to both at the same time. From what I understood, with destination load balancing the switch was supposed to balance traffic headed for one computer over one port and traffic headed for another over the other port. From the Cisco documentation:
        With destination-MAC address forwarding, when packets are forwarded to an EtherChannel, the packets are distributed across the ports in the channel based on the destination host MAC address of the incoming packet. Therefore, packets to the same destination are forwarded over the same port, and packets to a different destination are sent on a different port in the channel. For the 3550 series switch, when source-MAC address forwarding is used, load distribution based on the source and destination IP address is also enabled for routed IP traffic. All routed IP traffic chooses a port based on the source and destination IP address. Packets between two IP hosts always use the same port in the channel, and traffic between any other pair of hosts can use a different port in the channel. (Link)
    What am I doing wrong, and what would I need to do to see a speed increase beyond what I could do with just a single card?
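
    For reference, a persistent version of the same lagg setup can go in /etc/rc.conf (a sketch assuming the interface names and address above; it mirrors the FreeBSD handbook's LACP example):

        # /etc/rc.conf
        ifconfig_em0="up"
        ifconfig_tx0="up"
        cloned_interfaces="lagg0"
        ifconfig_lagg0="laggproto lacp laggport em0 laggport tx0 192.168.1.16 netmask 255.255.255.0"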

    Read the article

  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    after much testing and hundreds of tries and hours invested I decided to consult you experts here. Overview: I want to apply some GPO to our users which will add some specific site to the Trusted Sites in Internet Explorer settings for all users. However, the more I try the more confusing the results become. The GPO is either applied to one group of users, or to another one. Finally, I came to the conclusion that this weird behavior is cause rather by the poor organization in Users and Groups in Active Directory. As such I want to kick the problem from the root: Redesign the Active Directory Users and Groups. Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way: IT: Admins, Software Development Business: Administration, Management The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company has used Small Business Server which has created multiple default user groups and containers. Unfortunately, the guys working before me have do no documentation at all. Now, as I inherit this structure I am in the no mans land. No idea which direction to head first. As you can see, the Active Directory User and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment the guys before me have simply copied the same structure. The real question: Where should I start cleaning from, ensuring that I won't break totally the current infrastructure? What is a nice organization for the scenario that I have explained above? Possible useful info for the current structure: Computers folder contains Terminal Services Computers user group Members: TerminalServer computer located at Server -> Terminalserver OU Member of: NONE Foreign Security Principals : EMPTY Managed Service Accounts : EMPTY Microsoft Exchange Security Groups : not sure if needed, our emails are administered by external service provider Distribution Groups : not sure if needed Security Groups : there are couple of groups which are needed SBS users : contains all the users Terminalserver : contains only the TerminalServer machine

    Read the article

  • WCF Duplex Interaction with Web Server

    - by Mark Struzinski
    Here is my scenario, and it is causing us a considerable amount of grief at the moment: We have a vendor web service which provides base level telephony functionality. This service has a SOAP api, which we are leveraging to build up a custom UI that is integrated into our in house web apps. The api functions on 2 levels. You make standard client calls into the service to initiate actions, such as Login, Place Call, Hang Up, etc. On a different thread, the service sends events back to the client to alert the user of things that are occurring on the system (agent successfully logged in, call was disconnected, etc). I implemented a WCF service to sit between the web server and the vendor service. This WCF service operates in duplex mode, establishing a 2 way connection with the web server. The web server makes outbound calls to the WCF service, which routes them to the vendor's web service. Events are received back to the WCF service, which passes them onto the web server via a callback channel on the WCF client. As events are received on the web server, they are placed into a hash table with the user's name as the key, and a .NET queue as the value to hold the event. Each event is enqueued to the agent who owns it. On a 2 second interval, the web page polls the web server via an ajax request to get new events for the logged in user. It hits the hash table for the user key, dequeues any events that are present, and serializes them back up to the web page. From there, they are processed in order and appropriate messages are displayed to the user. This implementation performs well in a single user scenario. The second I put more than 1 user on the system, I start getting frequent timeouts with the following CommunicationException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond We are running Windows Server 2008 R2 both servers. Both the web app and WCF service are running on .NET 3.5. The WCF service is running under the net.tcp protocol in duplex mode. The web app is ASP.NET MVC 2. Has anyone dealt with anything like this scenario? Is there a more efficient way (or a widely accepted pattern) to implement this?
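
    For reference, a minimal thread-safe sketch of the per-user event queue described above (type and member names here are hypothetical, and the code sticks to plain locking so it stays .NET 3.5 compatible):

        using System.Collections.Generic;

        public class EventStore
        {
            private readonly Dictionary<string, Queue<string>> _events =
                new Dictionary<string, Queue<string>>();
            private readonly object _sync = new object();

            // called from the WCF callback channel when an event arrives
            public void Enqueue(string user, string evt)
            {
                lock (_sync)
                {
                    Queue<string> q;
                    if (!_events.TryGetValue(user, out q))
                    {
                        q = new Queue<string>();
                        _events[user] = q;
                    }
                    q.Enqueue(evt);
                }
            }

            // called by the 2-second AJAX poll for the logged-in user
            public List<string> DequeueAll(string user)
            {
                lock (_sync)
                {
                    var result = new List<string>();
                    Queue<string> q;
                    if (_events.TryGetValue(user, out q))
                    {
                        while (q.Count > 0) result.Add(q.Dequeue());
                    }
                    return result;
                }
            }
        }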

    Read the article

  • How can I make SWIG correctly wrap a char* buffer that is modified in C as a Java something-or-other?

    - by Ukko
    I am trying to wrap some legacy code for use in Java and I was quite happy to see that Swig was able to handle the header file and it generate a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse); This integer returned by this function is an error code in case it fails. The arguments are buff is a character buffer len is the length of the data in the buffer curse the another character buffer that contains the result of calling DustyVoodoo So, you can see where this is going, the result is actually coming back via the third argument. Also len is confusing since it may be the length of both buffers, they are always allocated as being the same size in calling code but given what DustyVoodoo does I don't think that they need be the same. To be safe both buffers should be the same size in practice, say 512 chars. The C code generated for the binding is as follows: SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls, jstring jarg1, jint jarg2, jstring jarg3) { jint jresult = 0 ; char *arg1 = (char *) 0 ; int arg2 ; char *arg3 = (char *) 0 ; int result; (void)jenv; (void)jcls; arg1 = 0; if (jarg1) { arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0); if (!arg1) return 0; } arg2 = (int)jarg2; arg3 = 0; if (jarg3) { arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0); if (!arg3) return 0; } result = (int)PemnEncrypt(arg1,arg2,arg3); jresult = (jint)result; if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1); if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3); return jresult; } It is correct for what it does; however, it misses the fact that cursed is not just an input, it is altered by the function and should be returned as an output. It also does not know that the java Strings are really buffers and should be backed by a suitably sized array. I think that Swig can do the right thing here, I just can't figure out from the documentation how to tell Swig what it needs to know. Any typemap masers in the house?
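
    Short of writing custom typemaps, one workaround is an extra C helper whose return value is the output buffer, which SWIG's Java module already maps to java.lang.String. This is a sketch only: DustyVoodooEasy and CURSE_MAX are made-up names, and the static buffer is not thread-safe.

        #include <string.h>

        #define CURSE_MAX 512

        static char curse_result[CURSE_MAX];

        /* Wraps DustyVoodoo so the "curse" output comes back as the return value. */
        const char *DustyVoodooEasy(char *buff, int len)
        {
            memset(curse_result, 0, sizeof(curse_result));
            if (DustyVoodoo(buff, len, curse_result) != 0)
                return NULL;          /* shows up as null in Java on error */
            return curse_result;      /* wrapped as java.lang.String */
        }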

    Read the article

  • Trying to install RMagick on Debian

    - by Janak
    One of things you need to do to get Rmagick installed apt-get install libmagick9-dev When I try that I get the following errors Reading package lists... Done Building dependency tree... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: libmagick9-dev: Depends: libjpeg62-dev but it is not going to be installed Depends: libbz2-dev but it is not going to be installed Depends: libtiff4-dev but it is not going to be installed Depends: libwmf-dev (>= 0.2.7-1) but it is not going to be installed Depends: libz-dev Depends: libpng12-dev but it is not going to be installed Depends: libfreetype6-dev but it is not going to be installed Depends: libexif-dev but it is not going to be installed Depends: libdjvulibre-dev but it is not going to be installed Depends: librsvg2-dev but it is not going to be installed Depends: libgraphviz-dev but it is not going to be installed E: Broken packages I don't know what to do to fix this? EDIT 1: I tried sudo apt-get install librmagick-ruby and it worked fine but then I needed to install the fleximage gem gem1.8 install fleximage and I got the following error message Building native extensions. This could take a while... ERROR: Error installing fleximage: ERROR: Failed to build gem native extension. /usr/bin/ruby1.8 extconf.rb checking for Ruby version >= 1.8.5... yes checking for cc... yes checking for Magick-config... no Can't install RMagick 2.12.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2 for inspection. Results logged to /usr/lib/ruby/gems/1.8/gems/rmagick-2.12.2/ext/RMagick/gem_make.out
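
    One rough way to narrow down which of those dependencies is the real blocker is to query and try them one at a time; a sketch (the package names are the ones already listed in the error output):

        sudo apt-get update
        # see which versions apt can actually find for the blocking packages
        apt-cache policy libjpeg62-dev libtiff4-dev libpng12-dev libwmf-dev
        # try a single dependency on its own to surface the underlying conflict
        sudo apt-get install libjpeg62-dev
        # aptitude (if installed) proposes conflict resolutions interactively
        sudo aptitude install libmagick9-dev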

    Read the article

  • PHP: split an array into smaller, even arrays

    - by SoulieBaby
    I have a function that is supposed to split my array into smaller, evenly distributed arrays, however it seems to be duplicating my data along the way. If anyone can help me out that'd be great. Here's the original array: Array ( [0] => stdClass Object ( [bid] => 42 [name] => Ray White Mordialloc [imageurl] => sp_raywhite.gif [clickurl] => http://www.raywhite.com/ ) [1] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [2] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [3] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [4] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [5] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) [7] => stdClass Object ( [bid] => 34 [name] => Adrianos Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => ) [8] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [9] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [10] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) [11] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [12] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [13] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) [14] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [15] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [16] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) [17] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [18] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [19] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) [20] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [21] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [22] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [23] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [24] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [25] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] 
=> msc logo.jpg [clickurl] => ) ) Here's the php function which is meant to split the array: function split_array($array, $slices) { $perGroup = floor(count($array) / $slices); $Remainder = count($array) % $slices ; $slicesArray = array(); $i = 0; while( $i < $slices ) { $slicesArray[$i] = array_slice($array, $i * $perGroup, $perGroup); $i++; } if ( $i == $slices ) { if ($Remainder > 0 && $Remainder < $slices) { $z = $i * $perGroup +1; $x = 0; while ($x < $Remainder) { $slicesRemainderArray = array_slice($array, $z, $Remainder+$x); $remainderItems = array_merge($slicesArray[$x],$slicesRemainderArray); $slicesArray[$x] = $remainderItems; $x++; $z++; } } }; return $slicesArray; } Here's the result of the split (it somehow duplicates items from the original array into the smaller arrays): Array ( [0] => Array ( [0] => stdClass Object ( [bid] => 57 [name] => Southern Suburbs Physiotherapy Centre [imageurl] => sp_sspc.jpg [clickurl] => http://www.sspc.com.au ) [1] => stdClass Object ( [bid] => 35 [name] => Bendigo Bank Parkdale / Mentone East [imageurl] => sp_bendigo.gif [clickurl] => http://www.bendigobank.com.au ) [2] => stdClass Object ( [bid] => 38 [name] => Lowe Financial Group [imageurl] => sp_lowe.gif [clickurl] => http://lowefinancial.com/ ) [3] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [4] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [5] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [6] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [1] => Array ( [0] => stdClass Object ( [bid] => 44 [name] => Mordialloc Personal Trainers [imageurl] => sp_mordipt.gif [clickurl] => # ) [1] => stdClass Object ( [bid] => 41 [name] => Print House Graphics [imageurl] => sp_printhouse.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 39 [name] => Main Street Mordialloc [imageurl] => main street cafe.jpg [clickurl] => ) [3] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [4] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [5] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [2] => Array ( [0] => stdClass Object ( [bid] => 56 [name] => Logical Services [imageurl] => sp_logical.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 43 [name] => Systema [imageurl] => sp_systema.gif [clickurl] => ) [2] => stdClass Object ( [bid] => 48 [name] => Beachside Osteo [imageurl] => sp_beachside.gif [clickurl] => http://www.beachsideosteo.com.au/ ) [3] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => http://www.2brothers.com.au/ ) [4] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [3] => Array ( [0] => stdClass Object ( [bid] => 53 [name] => Carmotive [imageurl] => sp_carmotive.jpg [clickurl] => http://www.carmotive.com.au/ ) [1] => stdClass Object ( [bid] => 45 [name] => Mordialloc Sporting Club [imageurl] => msc logo.jpg [clickurl] => ) [2] => stdClass Object ( [bid] => 33 [name] => Two Brothers [imageurl] => sp_2brothers.gif [clickurl] => 
http://www.2brothers.com.au/ ) [3] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [4] => Array ( [0] => stdClass Object ( [bid] => 59 [name] => Pure Sport [imageurl] => sp_psport.jpg [clickurl] => http://www.puresport.com.au/ ) [1] => stdClass Object ( [bid] => 54 [name] => PPM Builders [imageurl] => sp_ppm.jpg [clickurl] => http://www.hotfrog.com.au/Companies/P-P-M-Builders ) [2] => stdClass Object ( [bid] => 40 [name] => Ripponlea Mitsubishi [imageurl] => sp_mitsubishi.gif [clickurl] => ) ) [5] => Array ( [0] => stdClass Object ( [bid] => 46 [name] => Patterson Securities [imageurl] => cmyk patersons_withtag.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 34 [name] => Adriano's Pizza & Pasta [imageurl] => sp_adrian.gif [clickurl] => # ) ) [6] => Array ( [0] => stdClass Object ( [bid] => 55 [name] => 360South [imageurl] => sp_360.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 37 [name] => Mordialloc Cellar Door [imageurl] => sp_cellardoor.gif [clickurl] => ) ) [7] => Array ( [0] => stdClass Object ( [bid] => 49 [name] => Kim Reed Conveyancing [imageurl] => sp_kimreed.jpg [clickurl] => ) [1] => stdClass Object ( [bid] => 58 [name] => Dicount Lollie Shop [imageurl] => new dls logo.jpg [clickurl] => ) ) [8] => Array ( [0] => stdClass Object ( [bid] => 51 [name] => Richmond and Bennison [imageurl] => sp_richmond.jpg [clickurl] => http://www.richbenn.com.au/ ) [1] => stdClass Object ( [bid] => 52 [name] => Mordialloc Travel and Cruise [imageurl] => sp_morditravel.jpg [clickurl] => http://www.yellowpages.com.au/vic/mordialloc/mordialloc-travel-cruise-13492525-listing.html ) ) [9] => Array ( [0] => stdClass Object ( [bid] => 50 [name] => Letec [imageurl] => sp_letec.jpg [clickurl] => www.letec.biz ) [1] => stdClass Object ( [bid] => 36 [name] => Big River [imageurl] => sp_bigriver.gif [clickurl] => ) ) ) ^^ As you can see there are duplicates from the original array in the newly created smaller arrays. I thought I could remove the duplicates using a multi-dimensional remove duplicate function but that didn't work. I'm guessing my problem is in the array_split function. Any suggestions? :)
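
    For comparison, a sketch of an alternative split_array that deals the items out round-robin, so each element lands in exactly one group and duplicates cannot appear (with 26 items and 10 slices it yields six groups of 3 and four groups of 2):

        function split_array(array $array, $slices) {
            // one empty bucket per requested slice
            $result = array_fill(0, $slices, array());
            $i = 0;
            foreach ($array as $item) {
                $result[$i % $slices][] = $item;  // deal items out round-robin
                $i++;
            }
            return $result;  // group sizes differ by at most one
        }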

    Read the article

  • Should I ever put a major version number into a C#/Java namespace?

    - by Andrew Patterson
    I am designing a set of 'service' layer objects (data objects and interface definitions) for a WCF web service (that will be consumed by third party clients i.e. not in-house, so outside my direct control). I know that I am not going to get the interface definition exactly right - and am wanting to prepare for the time when I know that I will have to introduce a breaking set of new data objects. However, the reality of the world I am in is that I will also need to run my first version simultaneously for quite a while. The first version of my service will have URL of http://host/app/v1service.svc and when the times comes by new version will live at http://host/app/v2service.svc However, when it comes to the data objects and interfaces, I am toying with putting the 'major' version of the interface number into the actual namespace of the classes. namespace Company.Product.V1 { [DataContract(Namespace = "company-product-v1")] public class Widget { [DataMember] string widgetName; } public interface IFunction { Widget GetWidgetData(int code); } } When the time comes for a fundamental change to the service, I will introduce some classes like namespace Company.Product.V2 { [DataContract(Namespace = "company-product-v2")] public class Widget { [DataMember] int widgetCode; [DataMember] int widgetExpiry; } public interface IFunction { Widget GetWidgetData(int code); } } The advantages as I see it are that I will be able to have a single set of code serving both interface versions, sharing functionality where possible. This is because I will be able to reference both interface versions as a distinct set of C# objects. Similarly, clients may use both interface versions simultaneously, perhaps using V1.Widget in some legacy code whilst new bits move on to V2.Widget. Can anyone tell why this is a stupid idea? I have a nagging feeling that this is a bit smelly.. notes: I am obviously not proposing every single new version of the service would be in a new namespace. Presumably I will do as many non-breaking interface changes as possible, but I know that I will hit a point where all the data modelling will probably need a significant rewrite. I understand assembly versioning etc but I think this question is tangential to that type of versioning. But I could be wrong.
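
    As a sketch of the "single set of code serving both interface versions" idea (WidgetService is a hypothetical name): because both contracts declare GetWidgetData with different return types, explicit interface implementation keeps the two versions apart while the internals stay shared.

        public class WidgetService : Company.Product.V1.IFunction, Company.Product.V2.IFunction
        {
            Company.Product.V1.Widget Company.Product.V1.IFunction.GetWidgetData(int code)
            {
                // map the shared internal model to the V1 shape
                return new Company.Product.V1.Widget();
            }

            Company.Product.V2.Widget Company.Product.V2.IFunction.GetWidgetData(int code)
            {
                // map the shared internal model to the V2 shape
                return new Company.Product.V2.Widget();
            }
        }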

    Read the article

  • Programming a callback function within a jQuery plugin

    - by ILMV
    I'm writing a jQuery plug-in so I can reuse this code in many places as it is a very well used piece of code, the code itself adds a new line to a table which has been cloned from a hidden row, it continues to perform a load of manipulations on the new row. I'm currently referencing it like this: $(".abc .grid").grid(); But I want to include a callback so each area the plug-in is called from can do something a bit more unique when the row has been added. I've used the jQuery AJAX plug-in before, so have used the success callback function, but cannot understand how the code works in the background. Here's what I want to achieve: $(".abc .grid").grid({ row_added: function() { // do something a bit more specific here } }); Here's my plug-in code (function($){ $.fn.extend({ //pass the options variable to the function grid: function() { return this.each(function() { grid_table=$(this).find('.grid-table > tbody'); grid_hidden_row=$(this).find('.grid-hidden-row'); //console.debug(grid_hidden_row); $(this).find('.grid-add-row').click(function(event) { /* * clone row takes a hidden dummy row, clones it and appends a unique row * identifier to the id. Clone maintains our jQuery binds */ // get the last id last_row=$(grid_table).find('tr:last').attr('id'); if(last_row===undefined) { new_row=1; } else { new_row=parseInt(last_row.replace('row',''),10)+1; } // append element to target, changes it's id and shows it $(grid_table).append($(grid_hidden_row).clone(true).attr('id','row'+new_row).removeClass('grid-hidden-row').show()); // append unique row identifier on id and name attribute of seledct, input and a $('#row'+new_row).find('select, input, a').each(function(id) { $(this).appendAttr('id','_row'+new_row); $(this).replaceAttr('name','_REPLACE_',new_row); }); // disable all the readonly_if_lines options if this is the first row if(new_row==1) { $('.readonly_if_lines :not(:selected)').attr('disabled','disabled'); } }); $(this).find('.grid-remove-row').click(function(event) { /* * Remove row does what it says on the tin, as well as a few other house * keeping bits and pieces */ // remove the parent tr $(this).parents('tr').remove(); // recalculate the order value5 //calcTotal('.net_value ','#gridform','#gridform_total'); // if we've removed the last row remove readonly locks row_count=grid_table.find('tr').size(); console.info(row_count); if(row_count===0) { $('.readonly_if_lines :disabled').removeAttr('disabled'); } }); }); } }); })(jQuery); I've done the usually searching on elgooG... but I seem to be getting a lot of noise with little result, any help would be greatly appreciated. Thanks!
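
    A sketch of the usual pattern (row_added matches the option name above; the commented call assumes the new_row variable from the existing code): merge caller options over defaults with $.extend, then invoke the callback once the cloned row has been appended.

        (function ($) {
            $.fn.extend({
                grid: function (options) {
                    var settings = $.extend({
                        row_added: function (row) {}   // no-op default
                    }, options);

                    return this.each(function () {
                        // ... existing clone/append/rename logic ...
                        // once the new row exists:
                        // settings.row_added.call(this, $('#row' + new_row));
                    });
                }
            });
        })(jQuery);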

    Read the article

  • Why is the server performance so poor? What can be done to improve the speed of the server?

    - by fslsyed
    Very slow processing using Windows Server2008 R2 Standard with Service Pack One. Situation: Read a text file using the text data to populate a series of MS Sql tables. The converted data is used to generate monthly PDF invoice files; the PDF files are saved directly to the hard drive. The application is multi-threading with one thread used for the text conversion and three threads for PDF invoice generation. The text conversion is occurring concurrently with the invoice generation. Application Software: C# using Microsoft Visual Studio 2010 Ultimate. Crystal Report Writer 2011 with runtime 13_0_3 64 bit version. Targeted platform is x64; also tested as x86, and Any CPU with similar results. Microsoft .NET Framework 4.0. Microsoft Sql 2008 Issue: The software is running very slowly. The conversion of the text file is approximately six hundred fifty records per second and generation of the PDF files is approximately twelve invoices per minute. The text file to be converted is six hundred Meg with seven thousand invoices to be generated. The software was installed on three different machines from the same distribution files. The same text file was converted on each machine. The user executing the application was an administrator on each machine. The only variances were the machine and operating system. The configurations are as follows: Server: Operating System: Windows Server2008 R2 Standard 64-bit (6.1, Build7601) SP1 Service Pack: System Manufacturer: IBM System Model: System x3550 M3-[7944AC1]- BIOS: Default System BIOS Processor: Intel® Xeon® CPU E5620@ 2.4GHz (16 CPUs) Memory: 16384MB Notebook: Operating System: Windows 7 Home Premium Standard 64-bit (6.1, Build7601) System Manufacturer: Hewlett-Packard System Model: HP Pavilion dv7 Notebook PC BIOS: Default System BIOS Processor: AMD Phenom II N640 Dual-Core Processor 2.9GHz (2 CPUs) Memory: 6144MB Desktop: Operating System: Windows 7 Professional 64-bit (6.1, Build7601) SP1 System Manufacturer: Dell Inc. System Model: OptiPlex 960 BIOS: Phoenix ROM BIOS PLUS Version 1.10 A11 Processor: Intel Core™2 Quad CPU Q9650 @3.00GHZ (4 CPUs) Memory: 16384MB Processing results per machine: The applications were executed seven times with the averages being displayed below. Machine Text Records Invoices Generated Converted Per Minute Per Minute Server (1) 650 12 Notebook 980 17 Desktop 2,100 45 (1) The server is dedicated to execution of this application; no additional applications are being executed. Question: Why is the server performance so poor? What can be done to improve the speed of the server?

    Read the article

  • Sony VGN-NR260E "External Device Boot"

    - by user72158
    [A LITTLE BACKGROUND] On all modern Dell computers pushing the F12 on bios boot will allow for a screen that lets you choose what boot option you need. For example if I want to boot off of a USB flash drive to boot into a live Linux distribution in order to clean virus's on netbooks that do not have CD drives to boot from I would push F12 and choose USB device from the list of options. If this does not show up then I can always go to the F2 bios setup and choose flash drive to be the first option. When I restart the computer it will boot into the flash device. I understand that I can purchase an external USB CD drive and then boot from that. I do not want to use that option. The reason for using a flash device instead of a CD is: A: This USB flash device has several different boot OS's on it that are used. B: The antivirus disks are updated often and burning cd's and throwing away others is wasteful compared to simply updating a flash drive. There is nothing wrong with the flash drive. It works perfect on many other PC's. [PROBLEM] Booting this flashdrive has been working for years on hundreds of computers... I just have this ONE computer that I cannot figure out how to get it to boot on... I have a Sony Vaio that will not boot to this device. I've tried pushing every key combo I can think of (F12, Esc, Del, F10...) and none of these key combinations will bring up the boot menu. I chose F2 and went into the bios and changed the first boot device to USB flash device. This did not work either. There is an astrix next to the device and the note states: "This Drive is available when External Device Boot is Enable." [WHAT I NEED] I need to know How to enable External Device Boot on the Sony Vaio VGN-NR260E laptop. OR How to bring up the Boot Menu to allow me to boot off a flash device. Thanks for anyone that can help!

    Read the article

  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused reading tutorials from the Samba HOWTO, which is a hell of a mess. Could you write, step by step, what events happen upon NetLogon? In particular, I can't get these things:
    1. I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as its superset? What are the other roles of AD, and why is this term nearly a synonym for the term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How is it called (are there any upper-level or lower-level services that use it in the course of NetLogon)?
    2. How do I join a domain? On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it? What's the deal with machine trust accounts? How are they used?
    3. Suppose I've just configured a machine to join a domain, created its machine trust, and added its data to the domain controller. How would that machine find the WINS server to query it for the Domain Controller's NetBIOS name? Does any computer name ending with the <1C> type correspond to a domain controller?
    4. In what cases are Kerberos and LM/NTLM used for authentication?
    5. Where are password hashes stored in, say, a Windows 2000 domain controller? Right in the registry? What is SAM - is it a service responsible for authentication and sending/storing those passwords and accompanying information, such as group policies etc.? Who calls it? Does it use Active Directory?
    6. What's the role of NetBIOS other than name service? Can you exemplify a scenario of its usage as a "datagram distribution service for connectionless communication" or "session service for connection-oriented communication"? (Quotes taken from the description of NetBIOS roles at http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol)
    Thanks, and sorry for the many questions.

    Read the article

  • Thinkpad speaker turns mute - Linux Codec issue?

    - by Curlew
    At some point a few days ago the speakers on my Lenovo Thinkpad T410 (Model number: 2537A11) suddenly stopped working randomly. This error happens every time I watch a video or listen to a music file. The sound just abruptly stops. At the moment, I can't produce a single sound no matter what I do. I am using Debian GNU/Linux on this laptop and there doesn't appear to be anything else wrong (the fan is working, no abnormal heat (staying around ~40°C), no other obvious errors or problems). Here is the output of a nice program someone pointed me to: martin@martin:~/Downloads$ sudo python run.py --monitor Using temporary directory: /dev/shm/hda-analyzer You may remove this directory when finished or if you like to download the most recent copy of hda-analyzer tool. Downloading file hda_analyzer.py Downloading file hda_guilib.py Downloading file hda_codec.py Downloading file hda_proc.py Downloading file hda_graph.py Downloading file hda_mixer.py Downloaded all files, executing hda_analyzer.py Watching 1 cards ====================================== Sound is working normally and then it stops and the following lines appear: Diff for codec 0/0 (0x14f15069): --- +++ @@ -164,17 +164,17 @@ Power: setting=D0, actual=D0 Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Pincap 0x00000010: OUT Pin Default 0x901701f0: [Fixed] Speaker at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT - Power: setting=D0, actual=D0 + Power: setting=D3, actual=D3 Connection: 2 0x10* 0x11 Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE And now there is also an error in the dmesg output hda-intel: IRQ timing workaround is activated for card #0. Suggest a bigger bdl_pos_adj. I changed the bdl_pos_adj to various numbers (-1, 0, 64, 1024) and either there is no change at all or dmesg reports that the adjustment is too big. I wonder if this bdl_pos_adj is the real reason for the error. Here is my hardware information provided by alsa-info.sh website. Okay, i did some serious testing and even installed Windows and now i officially conclude that this is a hard-ware related issue with my Laptop speakers. Reason: The error occurs in my installed Debian Linux, an Ubuntu Live distribution and Windows XP No error-message appears in all of the OS. The sound just keeps running and i can't hear a thing. I tested different setups, including OSS, ALSA and the pulseaudio server on top If i use my new usb-headphones i can hear sound all the time without any sudden silences. So obviously, although hard to believe, my laptop speakers are not okay (never heard of similar cases). I'll award the bounty to anyone who can point me to good tutorials or the procedure how to exchange my T410 speakers (i still have warranty. The laptop was bought in Germany, but now i am in Denmark). Or to someone who can explain me the output from hda-analyzer (big log above).

    Read the article

  • Calling cdecl Functions That Have Different Numbers of Arguments

    - by KlaxSmashing
    I have functions that I wish to call based on some input. Each function has different number of arguments. In other words, if (strcmp(str, "funcA") == 0) funcA(a, b, c); else if (strcmp(str, "funcB") == 0) funcB(d); else if (strcmp(str, "funcC") == 0) funcC(f, g); This is a bit bulky and hard to maintain. Ideally, these are variadic functions (e.g., printf-style) and can use varargs. But they are not. So exploiting the cdecl calling convention, I am stuffing the stack via a struct full of parameters. I'm wondering if there's a better way to do it. Note that this is strictly for in-house (e.g., simple tools, unit tests, etc.) and will not be used for any production code that might be subjected to malicious attacks. Example: #include <stdio.h> typedef struct __params { unsigned char* a; unsigned char* b; unsigned char* c; } params; int funcA(int a, int b) { printf("a = %d, b = %d\n", a, b); return a; } int funcB(int a, int b, const char* c) { printf("a = %d, b = %d, c = %s\n", a, b, c); return b; } int funcC(int* a) { printf("a = %d\n", *a); *a *= 2; return 0; } typedef int (*f)(params); int main(int argc, char**argv) { int val; int tmp; params myParams; f myFuncA = (f)funcA; f myFuncB = (f)funcB; f myFuncC = (f)funcC; myParams.a = (unsigned char*)100; myParams.b = (unsigned char*)200; val = myFuncA(myParams); printf("val = %d\n", val); myParams.c = (unsigned char*)"This is a test"; val = myFuncB(myParams); printf("val = %d\n", val); tmp = 300; myParams.a = (unsigned char*)&tmp; val = myFuncC(myParams); printf("a = %d, val = %d\n", tmp, val); return 0; } Output: gcc -o func func.c ./func a = 100, b = 200 val = 100 a = 100, b = 200, c = This is a test val = 200 a = 300 a = 600, val = 0
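
    For comparison, a sketch of a cast-free variant of the same idea (the adapter and table names are made up; it reuses the params struct and funcA/funcB/funcC from the example, and needs <string.h> for strcmp and <stddef.h> for size_t in addition to the includes above): each target function gets a tiny adapter with a uniform signature, so no function pointer is ever called through a mismatched type.

        typedef int (*adapter_fn)(params);

        /* adapters unpack the params struct and call the real functions */
        static int callFuncA(params p) { return funcA((int)(size_t)p.a, (int)(size_t)p.b); }
        static int callFuncB(params p) { return funcB((int)(size_t)p.a, (int)(size_t)p.b, (const char *)p.c); }
        static int callFuncC(params p) { return funcC((int *)p.a); }

        static const struct { const char *name; adapter_fn fn; } dispatch_table[] = {
            { "funcA", callFuncA },
            { "funcB", callFuncB },
            { "funcC", callFuncC },
        };

        /* returns 0 and writes *result on success, -1 for an unknown name */
        static int dispatch(const char *str, params p, int *result)
        {
            size_t i;
            for (i = 0; i < sizeof(dispatch_table) / sizeof(dispatch_table[0]); i++) {
                if (strcmp(str, dispatch_table[i].name) == 0) {
                    *result = dispatch_table[i].fn(p);
                    return 0;
                }
            }
            return -1;
        }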

    Read the article

  • Spring's JdbcDaoSupport (using MySQL Connector/J) fails after executing SQL that adds FKs

    - by John
    I am using Spring's JdbcDaoSupport class with a DriverManagerDataSource using the MySQL Connector/J 5.0 driver (driverClassName=com.mysql.jdbc.driver). allowMultiQueries is set to true in the url. My application is an in-house tool we recently developed that executes sql scripts in a directory one-by-one (allows us to re-create our schema and reference table data for a given date, etc, but I digress). The sql scripts sometime contain multiple statements (hence allowMultiQueries), so one script can create a table, add indexes for that table, etc. The problem happens when including a statement to add a foreign key constraint in one of these files. If I have a file that looks like... --(column/constraint names are examples) CREATE TABLE myTable ( fk1 BIGINT(19) NOT NULL, fk2 BIGINT(19) NOT NULL, PRIMARY KEY (fk1, fk2) ); ALTER TABLE myTable ADD CONSTRAINT myTable_fk1 FOREIGN KEY (fk1) REFERENCES myOtherTable (id) ; ALTER TABLE myTable ADD CONSTRAINT myTable_fk2 FOREIGN KEY (fk2) REFERENCES myOtherOtherTable (id) ; then JdbcTemplate.execute throws an UncategorizedSqlException with the following error message and stack trace: Exception in thread "main" org.springframework.jdbc.UncategorizedSQLException: StatementCallback; uncategorized SQLException for SQL [ THE SQL YOU SEE ABOVE LISTED HERE ]; SQL state [HY000]; error code [1005]; Can't create table 'myDatabase.myTable' (errno: 150); nested exception is java.sql.SQLException: Can't create table 'myDatabase.myTable' (errno: 150) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80) and the table and foreign keys are not inserted. Also, especially weird: if I take the foreign key statements out of the script I showed above and then place them in their own script that executes after (so I now have 1 script with just the create table statement, and 1 script with the add foreign key statements that executes after that) then what happens is: tool executes create table script, works fine, table is created tool executes add fk script, throws the same exception as seen above (except errno=121 this time), but the FKs actually get added (!!!) In other words, when the create table/FK statements are in the same script then the exception is thrown and nothing is created, but when they are different scripts a nearly identical exception is thrown but both things get created. Any help on this would be greatly appreciated. Please let me know if you'd like me to clarify anything more.
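
    For what it's worth, MySQL errno 150 (and 121, which usually means a constraint name already exists) tends to come from the server itself rather than from Spring: the referenced columns must match the referencing ones exactly, and the tables involved need a foreign-key-capable engine. A sketch of the same DDL with the engine made explicit and the constraints folded into the CREATE TABLE (assuming myOtherTable.id and myOtherOtherTable.id are BIGINT(19) columns on InnoDB tables):

        CREATE TABLE myTable (
            fk1 BIGINT(19) NOT NULL,
            fk2 BIGINT(19) NOT NULL,
            PRIMARY KEY (fk1, fk2),
            CONSTRAINT myTable_fk1 FOREIGN KEY (fk1) REFERENCES myOtherTable (id),
            CONSTRAINT myTable_fk2 FOREIGN KEY (fk2) REFERENCES myOtherOtherTable (id)
        ) ENGINE=InnoDB;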

    Read the article

  • Single domain name potentially resolving to multiple servers

    - by Jace
    first time here at Server Fault, and I apologize in advance that this domain stuff is not really my strength. Any and all suggestions are much appreciated. I am completely lost and incredibly tired! I've inherited an incredibly convoluted system from my predecessor, and I'm trying to find a way to solve it - or I need to be told that it just isn't possible. I've got an old site on ServerA (some kind of Linux distribution), with the domain SomeDomain.com There is a new site sitting on ServerB (Ubuntu), with the intention of having SomeDomain.com to serve it in the future (it is replacing the old site) ServerA also has a web app that is currently in use by other departments within the company (accessible at SomeDomain.com/web-app/) The goal: To have SomeDomain.com and all extensions of this domain name (sub-domains, URL's etc.) serve the new site on ServerB. BUT, the URL SomeDomain.com/web-app/ must serve the Web App on ServerA. The Catch: The ServerA is a shared server with a hosting company with VERY limiting restrictions in place - I cannot adjust DNS settings (apart from Name servers - but cannot set A records or anything, I have full access to ServerB to do as I wish). Therefore the web-app MUST be served from SomeDomain.com/web-app/ and not from a sub-domain or anything. These limitations make migrating the web-app from Server A to Server B rather undesirable, AND this web-app will be replaced in the near future, so it isn't worth the effort right now. Therefore, ultimately I will want 1 domain name to resolve to Server B's IP address most of the time, but in the event that the URL is SomeDomain.com/web-app/, it should resolve to Server A's IP. Note: The domain names don't, technically, have to resolve to one IP or another - but ultimately the URL's must stay consistent Some things I have tried: I've looked into mod_rewrite and .htaccess to try and achieve this effect, but it doesn't look like it's going to work for me - but I may have done it wrong (On Server B, I just checked if the request URI was /web-app/ and tried to serve the /web-app/ folder on Server A) I do have the ability to modify the name servers on both servers I am not able to make a sub domain on Server A that points back to Server A (I assume because the hosting company's servers use the URL to determine what site the serve). I figured this could be good as I'd could set an A record on Server B to point to the web app on Server A - but alas, Server A requires SomeDomain.com. If there is any more information I can give, please let me know. I need a nudge in the right direction, ideas or a solution.
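
    One possible direction, sketched for Server B only (it assumes mod_proxy and mod_proxy_http are enabled, and that Server A will accept requests addressed by IP as long as the SomeDomain.com Host header is preserved; SERVER_A_IP is a placeholder): point DNS for SomeDomain.com at Server B and let it pass /web-app/ through to Server A.

        ProxyPreserveHost On
        ProxyPass        /web-app/ http://SERVER_A_IP/web-app/
        ProxyPassReverse /web-app/ http://SERVER_A_IP/web-app/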

    Read the article

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.) There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a comparison at a high level of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from an in-house Enterprise-level CRUD application. I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong... I am primarily focusing on the Microsoft approaches to keep this focused. ADO.NET Entity Framework Database agnostic Good because it allows swapping backends in and out Bad because it can hit performance and database vendors are not too happy about it Seems to be MS's preferred route for the future Complicated to learn (though, see 267357) It is accessed through LINQ to Entities so provides ORM, thus allowing abstraction in your code LINQ to SQL Uncertain future (see Is LINQ to SQL truly dead?) Easy to learn (?) Only works with MS SQL Server See also Pros and cons of LINQ "Standard" ADO.NET No ORM No abstraction so you are back to "roll your own" and play with dynamically generated SQL Direct access, allows potentially better performance This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is" and since that is an unanswerable question hopefully we don't have to go in to that too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code, you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back-end. Whereas, if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows then ORM makes complete sense. So, I guess my argument for good old-fashioned ADO.NET would be in the case where you manipulate and modify large datasets, in which case you will benefit from the direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures. ASP.NET Data Source Controls Are these something altogether different or just a layer over standard ADO.NET? - Would you really use these if you had a DAL or if you implemented LINQ or Entities? NHibernate Seems to be a very powerful and powerful ORM? Open source Some other relevant links; NHibernate or LINQ to SQL Entity Framework vs LINQ to SQL

    Read the article

  • Best way to version control a WCF application with Git?

    - by Sam
    Suppose I have the following projects. The format is [ProjectName] : [ProjectDependency1, ProjectDependency2, etc.] // Service CoolLibrary WcfApp.Core WcfApp.Contracts WcfApp.Services : CoolLibrary, WcfApp.Core, WcfApp.Contracts // Clients CustomerX.App : WcfApp.Contracts CustomerY.App : WcfApp.Contracts CustomerZ.App : WcfApp.Contracts (On a side note, WcfApp.Contracts should not depend on WcfApp.Core, right? Else CustomerX.App would also depend on and thus be exposed to the service domain model?) (CoolLibrary is shared with other applications, so I can't just put it inside of WcfApp.Services.) All of this code is in-house. I was thinking of having 6 repositories for this. The format is [repository folder name] : [Projects included in repository.] 1. CoolLibrary.git : CoolLibrary 2. WcfApp.Contracts.git : WcfApp.Contracts 3. WcfApp.git : WcfApp.Core, WcfApp.Services 4. CustomerX.App.git : CustomerX.App 5. CustomerY.App.git : CustomerY.App 6. CustomerZ.App.git : CustomerZ.App How should I manage my project dependencies? I see three options: I could use binaries which I have to manually copy to each dependent repository. This would be easiest at the start, but my repositories would be a little bloated, and it'd become more tedious as I add more client apps for customers. I could import dependent code as submodules. This is what I will probably end up doing, although I keep reading on the web that submodules are a hassle. I also read that I can use something called the subtree merge strategy, but I am not sure how it is different from just cloning the repo into a subdirectory and adding the subdirectory to .gitignore. Is the difference that the subtree is recorded in the master repository, so (for example) cloning it from a different location will also pull the subtree? I know I asked a lot of questions in this post, but the most important two questions I have are: 1. Am I using the right number and layout of repositories? Should I use less or more? 2. Which of the three dependency management strategies would you recommend? Is there another strategy I haven't considered?
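
    For reference, a sketch of option 2 with the contracts repository (the repository URL and the lib/ path are placeholders):

        cd CustomerX.App
        git submodule add ssh://git@yourserver/WcfApp.Contracts.git lib/WcfApp.Contracts
        git commit -m "Track WcfApp.Contracts as a submodule"

        # fresh clones need the submodule pulled in as well
        git clone --recursive ssh://git@yourserver/CustomerX.App.git

        # bumping the dependency later is an explicit, recorded step
        cd lib/WcfApp.Contracts && git pull origin master
        cd ../.. && git commit -am "Bump WcfApp.Contracts to latest"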

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2 based web servers, so my experience is many hours, but is limited to CentOS; specifically Amazon's distribution. I'm installing Apache using yum, so therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) lead me to believe it's something specific to Amazon's build of Apache. TEST CASE Launch a EC2 Instance, e.g. Amazon Linux AMI 2013.03.1 SSH to the Server Run the commands: $ sudo yum install httpd $ sudo apachectl start $ sudo vi /etc/httpd/conf/httpd.conf $ sudo apachectl restart $ sudo vi /var/www/html/.htaccess In httpd.conf I changed the following, in the DOCROOT section / scope: AllowOverride All In .htaccess, added: (EDIT, I added RewriteEngine On later) RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] Permissions on .htaccess are correct, AFAI can tell: $ ls -al /var/www/html/.htaccess -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess Other info: $ httpd -v Server version: Apache/2.2.24 (Unix) Server built: May 20 2013 21:12:45 $ httpd -M Loaded Modules: core_module (static) ... rewrite_module (shared) ... version_module (shared) Syntax OK EXPECTED BEHAVIOR $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 ACTUAL BEHAVIOR $ curl -I domain.com HTTP/1.1 200 OK Date: Wed, 19 Jun 2013 12:34:10 GMT Server: Apache/2.2.24 (Amazon) Connection: close Content-Type: text/html; charset=UTF-8 TROUBLESHOOTING STEPS In .htaccess, added: BLAH BLAH BLAH ERROR RewriteCond %{HTTP_HOST} ^domain\.com$ [NC] RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L] My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an Error log entry: [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration Since I have root access on the server, I then tried moving my rewrite rule directly to the httpd.conf file. THIS WORKED. This tells us several important things are working. $ curl -I domain.com HTTP/1.1 301 Moved Permanently Date: Wed, 19 Jun 2013 12:36:22 GMT Server: Apache/2.2.24 (Amazon) Location: http://www.domain.com/ Connection: close Content-Type: text/html; charset=UTF-8 HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.

    Read the article

  • Insert Registration Data in MySQL using PHP

    - by J M 4
    I may not be asking this in the best way possible, but I will try my hardest. Thank you ahead of time for your help.

    I am creating an enrollment website which allows an individual OR a manager to enroll for medical testing services for professional athletes. The site will NOT be used as a query database which anybody can browse; the information is simply stored, and then passed along in CSV format to our network provider so they can use it as needed after the fact. There are two possible scenarios:

    Scenario 1 - Individual Enrollment
    If an individual athlete chooses to enroll him/herself, they enter their personal information, submit their payment information (credit/bank account) for processing, and their information is stored in an online database as Athlete1.

    Scenario 2 - Manager Enrollment
    If a manager chooses to enroll several athletes he manages/promotes for, he enters his personal information, then enters the personal information for each athlete he wishes to pay for (name, address, SSN, DOB, etc.), then submits payment information for ALL athletes he is enrolling. This number can range from a single athlete up to 20 athletes per enrollment (he can return and complete a follow-up enrollment for additional athletes).

    Initially, I was building the database to house ALL information, regardless of enrollment type, in a single table with over 400 columns (think 20 athletes with over 10 fields per athlete, such as name, DOB, SSN, etc.). Now that I think about it more, I believe creating multiple tables (manager(s), athlete(s)) may be a better idea, but I am still not quite sure how to go about it, for the following very important reasons:

    Issue 1
    If I list the manager as the parent table, I am afraid the individually enrolling athlete will not show up in the primary table and will not be included in the overall registration file which needs to be sent on to the network providers.

    Issue 2
    All athletes being enrolled by a manager are being stored in SESSION as F1FirstName, F2FirstName, where F1 and F2 relate to the id of the fighter. I am not sure, technically speaking, how to store multiple pieces of information within the same table under separate rows using PHP. For example, all athletes will have a first name. The very basic theory of what I am trying to do is:

    If number_of_athletes > 1,
    store F1FirstName in row 1, column 1 of table "Athletes";
    store F1LastName in row 1, column 2 of table "Athletes";
    store F2FirstName in row 2, column 1 of table "Athletes";
    store F2LastName in row 2, column 2 of table "Athletes";

    Does this make sense? I know this question is very long and probably difficult, so I appreciate the guidance.
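    For illustration only, a normalised layout with one row per athlete could be filled from those session keys roughly like this; the table and column names are invented, the PDO credentials are placeholders, and real input validation is omitted:

    <?php
    // Hypothetical schema: managers(id, first_name, last_name, ...) and
    // athletes(id, manager_id, first_name, last_name, ...), one row per athlete.
    // manager_id can be NULL for an individual (self-enrolled) athlete.
    $pdo = new PDO('mysql:host=localhost;dbname=enrollment', 'dbuser', 'dbpass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $pdo->prepare(
        'INSERT INTO athletes (manager_id, first_name, last_name)
         VALUES (:manager_id, :first_name, :last_name)'
    );

    $managerId = isset($_SESSION['manager_id']) ? $_SESSION['manager_id'] : null;
    $count     = (int) $_SESSION['number_of_athletes'];

    for ($i = 1; $i <= $count; $i++) {
        // Session keys follow the F1FirstName / F2FirstName pattern from the question.
        $stmt->execute(array(
            ':manager_id' => $managerId,
            ':first_name' => $_SESSION['F' . $i . 'FirstName'],
            ':last_name'  => $_SESSION['F' . $i . 'LastName'],
        ));
    }

    One prepared statement executed once per athlete keeps the column count fixed no matter how many athletes a manager enrolls, and the same code path handles an individual enrollment with a count of one.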

    Read the article

  • PHP Shared Sessions across Domain

    - by bigstylee
    Hi, I have seen a few answers to this on SOO, but most of them are concerned with the use of subdomains, and none have worked for me. The common one is the use of session.cookie_domain, which from my understanding will only work with subdomains. I am interested in a solution that deals with entirely different domains (and includes the possibility of subdomains). Unfortunately, project deadlines being what they are, time is not on my side, so I turn to SOO's expertise and experience.

    The current project brief is to be able to log into one site, which currently only stores the user_id in the session, and then be able to retrieve this value while on a different domain within the same server environment. Session data is being stored in and retrieved from a database, where the session id is the primary key. I am hoping to find a "lightweight" and "easy to implement" solution. The system uses an in-house Model View Controller design pattern, so all requests (including those for different domains) are run through a single bootstrap script, which uses the domain name as a variable to determine what context to display to the user.

    One option that did look like it has potential is the use of a hidden image, with the alt tag used to set the user id. My first impression is that this seems "too easy" (if possible at all) and riddled with security flaws. Discuss?

    Another option I considered is using the IP and User Agent for authentication, but again I feel this is not going to be a reliable option due to shared networks and changing IP addresses.

    My third (and preferred) option is using htaccess to fool the user into thinking that they are on a different domain when in fact Apache is redirecting; something like www.foo.com/index.php?domain=bar.com&controller=news/categories/1 displayed to the user as www.bar.com/news/categories/1. Here foo.com represents the "main site domain" which all requests are run through, and bar.com is what the user thinks they are accessing. The controller request dictates the page and view being requested. Is this possible? Are there other options? Pros/cons?

    Thanks in advance!
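    As a rough sketch of the database-backed half of this (the hand-off of the session id between domains still has to be solved separately, for example by appending it to the cross-domain redirect), PHP's session_set_save_handler can point both sites at the same sessions table; the table name and columns below are assumptions:

    <?php
    // Assumed table: sessions(id VARCHAR(128) PRIMARY KEY, data TEXT, updated_at INT).
    // Requires PHP 5.3+ for the anonymous functions.
    $pdo = new PDO('mysql:host=localhost;dbname=shared_sessions', 'dbuser', 'dbpass');

    session_set_save_handler(
        function () { return true; },                  // open
        function () { return true; },                  // close
        function ($id) use ($pdo) {                    // read
            $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute(array($id));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            return $row ? $row['data'] : '';
        },
        function ($id, $data) use ($pdo) {             // write
            $stmt = $pdo->prepare('REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, ?)');
            return $stmt->execute(array($id, $data, time()));
        },
        function ($id) use ($pdo) {                    // destroy
            return $pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute(array($id));
        },
        function ($maxlifetime) use ($pdo) {           // gc
            return $pdo->prepare('DELETE FROM sessions WHERE updated_at < ?')
                       ->execute(array(time() - $maxlifetime));
        }
    );
    session_start();

    Because every domain runs through the same bootstrap script, registering this handler there means any site that knows the session id reads the same row, regardless of which hostname served the request.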

    Read the article

  • Identifying if a user is in the local administrators group

    - by Adam Driscoll
    My Problem
    I'm using PInvoked Windows API functions to verify whether a user is part of the local administrators group. I'm using GetCurrentProcess, OpenProcessToken, GetTokenInformation and LookupAccountSid to verify that the user is a local admin. GetTokenInformation returns a TOKEN_GROUPS struct with an array of SID_AND_ATTRIBUTES structs. I iterate over the collection and compare the user names returned by LookupAccountSid.

    My problem is that locally (or more generally on our in-house domain) this works as expected: builtin\Administrators is found within the group membership of the current process token and my method returns true. On another developer's domain the function returns false. LookupAccountSid works properly for the first 2 iterations of the TOKEN_GROUPS struct, returning None and Everyone, and then craps out complaining that "A Parameter is incorrect."

    What would cause only two groups to work correctly? The TOKEN_GROUPS struct indicates that there are 14 groups. I'm assuming it's the SID that is invalid. Everything I have PInvoked I have taken from an example on the PInvoke website. The only difference is that for LookupAccountSid I changed the Sid parameter from a byte[] to an IntPtr, because SID_AND_ATTRIBUTES is also defined with an IntPtr. Is this OK, given that LookupAccountSid is defined with a PSID?

    LookupAccountSid PInvoke
    [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    static extern bool LookupAccountSid(
        string lpSystemName,
        IntPtr Sid,
        StringBuilder lpName,
        ref uint cchName,
        StringBuilder ReferencedDomainName,
        ref uint cchReferencedDomainName,
        out SID_NAME_USE peUse);

    Where the code falls over
    for (int i = 0; i < usize; i++)
    {
        accountCount = 0;
        domainCount = 0;
        // Get sizes
        LookupAccountSid(null, tokenGroups.Groups[i].SID, null, ref accountCount, null, ref domainCount, out snu);
        accountName2.EnsureCapacity((int) accountCount);
        domainName.EnsureCapacity((int) domainCount);
        if (!LookupAccountSid(null, tokenGroups.Groups[i].SID, accountName2, ref accountCount, domainName, ref domainCount, out snu))
        {
            // Finds its way here after 2 iterations,
            // but only in a different developer's domain
            var error = Marshal.GetLastWin32Error();
            _log.InfoFormat("Failed to look up SID's account name. {0}", new Win32Exception(error).Message);
            continue;
        }

    If more code is needed let me know. Any help would be greatly appreciated.
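    As a cross-check rather than a replacement for the PInvoke approach, the managed WindowsPrincipal API performs the same membership test against the current token and can help confirm whether the failing machine really reports the group differently:

    using System;
    using System.Security.Principal;

    class AdminCheck
    {
        static void Main()
        {
            // Checks the current process token against BUILTIN\Administrators.
            // Note: with UAC enabled, a non-elevated process carries a filtered
            // token, so this reports false even for members of the group.
            WindowsIdentity identity = WindowsIdentity.GetCurrent();
            WindowsPrincipal principal = new WindowsPrincipal(identity);
            Console.WriteLine(principal.IsInRole(WindowsBuiltInRole.Administrator));
        }
    }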

    Read the article

  • Week in Geek: USDA Chooses Microsoft for Cloud Services Edition

    - by Asian Angel
    This week we learned how to create geeky LED holiday lights with old bottles, dig deeper in Windows Defrag via the command prompt, use Google Chrome's drag/drop feature to upload files easier, find great gift recommendations by looking through the How-To Geek holiday gift guide, and have fun adding Merry Christmas fonts to our computers. Photo by ntr23.

    Random Geek Links
    It has been a busy week, so we have extra news link goodness with information that is good for you to know.
    - USDA making the move to Microsoft: The U.S. Department of Agriculture has announced that it has chosen Microsoft to host things like e-mail, instant messaging, and collaboration through the software giant's Business Productivity Online Suite.
    - Google says it was cut off from USDA project bid: Google is claiming that it was not given a chance to bid on a cloud-computing project for the U.S. Department of Agriculture, for which the contract was awarded to rival Microsoft.
    - Apache is being forced into a Java Fork: When Oracle rolled over Apache and Google's objections to its Java plans in December, the scene was set for Apache to leave and, eventually, force a Java code fork.
    - Tumblr explains daylong outage: After experiencing an outage that started on Sunday afternoon and stretched through most of the day yesterday, Tumblr has explained what happened.
    - Google demos Chrome OS, launches pilot program: During a press briefing this week in San Francisco, Google launched the Chrome application store and demonstrated Chrome OS, its browser-centric netbook operating system.
    - Don't expect Spotify in U.S. this holiday season: As of last week, Spotify had yet to sign a single licensing deal with a major label, after spending more than a year negotiating, multiple music sources told CNET.
    - December 2010 Patch Tuesday will come with most bulletins ever: According to the Microsoft Security Response Center, Microsoft will issue 17 Security Bulletins addressing 40 vulnerabilities on Tuesday, December 14. It will also host a webcast to address customer questions the following day.
    - Hacker plants back door in Symbian firmware: Indian hacker Atul Alex has had a look at the firmware for Symbian S60 smartphones and come up with a back door for it.
    - PC quarantines raise tough complexities: The concept of quarantining PCs to prevent widespread infection is "interesting, but difficult to implement, with far too many problems", said security experts.
    - Symantec: DDoS attacks hard to defend: It has surfaced that the distributed denial of service (DDoS) attacks on Visa and MasterCard Web sites on Wednesday were carried out by a toolkit known as low orbit ion cannon (LOIC).
    - Web Sockets and the risks of unfinished standards: Enthusiasm for a promising new standard called Web Sockets has quickly cooled in some quarters as a potential security problem led some browser makers to hastily postpone support.
    - Internet Explorer 9 to get tracking protection: Microsoft is making changes to Internet Explorer 9's security features that will better enable users to keep sites from tracking their activity across browsing sessions.
    - NASA sold PCs with sensitive data: NASA failed to remove sensitive data from computers that it sold, according to an audit report released this week.
    - Cybercrooks create fake Amazon receipts: The bad guys have created yet another online scam, this one involving fake Amazon receipts.
    - World of Warcraft character move fees waived: Until December 22, Blizzard will allow free realm transfers from 25 highly populated servers to alleviate log-in queues or performance issues. (The free transfers are one-way and one-time only.)
    - SpaceX Dragon reaches orbit atop a Falcon with a fiery tail: The Space Exploration Technologies corporation has become the first nongovernmental entity to put a vehicle into low Earth orbit.

    Geek Video of the Week
    If birds have wings, then why are the Angry Birds using slingshots? Photo by Dorkly Bits.
    Wait… Birds have Wings, Why are the Angry Ones Using Slingshots?

    Sysadmin Geek Tips
    How To Setup Email Alerts on Linux Using Gmail or SMTP: Linux machines may require administrative intervention in countless ways, but without manually logging into them how would you know about it? Here's how to setup emails to get notified when your machines want some tender love and attention.

    Random TinyHacker Links
    - Red Panda Webcam: Support Firefox and the Knoxville Zoo's Red Panda program.
    - Christmas Icons (Icons we like): Superb set of holiday icons by lgp85 at deviantArt. Download the .zip and use as .png or convert to .ico at Convertico.com or with tiny app Imagicon.

    Super User Questions
    Enjoy reading the great answers to this week's popular questions from Super User:
    - Useful USB boot disks?
    - DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?
    - What are other ways to backup my files if I do not have an external drive?
    - Anti virus: what is the difference between these all?
    - How can I block all Facebook elements/content?

    How-To Geek Weekly Article Recap
    Have you had a busy week between work and preparing for the holidays? Get caught up on your HTG reading with our hottest articles of the week.
    - 20 Windows Keyboard Shortcuts You Might Not Know
    - The 50 Best Registry Hacks that Make Windows Better
    - LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology
    - HTG Explains: Which Linux File System Should You Choose?
    - How to Use and Customize Google Chrome Web Apps

    One Year Ago on How-To Geek
    This week's batch of retro geeky goodness is all about customizing Windows 7.
    - ClassicShell Adds Classic Start Menu and Explorer Features to Windows 7
    - Get an Aero-Styled Classic Start Menu in Windows 7
    - Customize the Windows 7 Logon Screen
    - Get the Classic Style Network Activity Indicator Back in Windows 7
    - How To Enable Check Boxes for Items In Windows 7

    The Geek Note
    We would like you to join us in welcoming Jason Fitzpatrick to the writing staff here at How-To Geek. He started with us this past week, so take some time to read through his articles about the Wii, Kindle, & PlayStation 2 Peripherals and leave a friendly comment to say "Hi"! Got a great tip to share? Make sure to send it in to us at [email protected]. Photo by real00.

    Read the article

  • How To Activate Your Free Office 2007 to 2010 Tech Guarantee Upgrade

    - by Matthew Guay
    Have you purchased Office 2007 since March 5th, 2010? If so, here's how you can activate and download your free upgrade to Office 2010!

    Microsoft Office 2010 has just been released, and today you can purchase upgrades from most retail stores or directly from Microsoft via download. But if you've purchased a new copy of Office 2007, or a new computer that came with Office 2007, since March 5th, 2010, then you're entitled to an absolutely free upgrade to Office 2010. You'll need to enter information about your Office 2007 and then download the upgrade, so we'll step you through the process.

    Getting Started
    First, if you've recently purchased Office 2007 but haven't installed it, you'll need to go ahead and install it before you can get your free Office 2010 upgrade. Install it as normal.
    Once Office 2007 is installed, run any of the Office programs. You'll be prompted to activate Office. Make sure you're connected to the internet, and then click Next to activate.

    Get your Free Upgrade to Office 2010
    Now you're ready to download your upgrade to Office 2010. Head to the Office Tech Guarantee site (link below), and click Upgrade now.
    You'll need to enter some information about your Office 2007. Check that you purchased your copy of Office 2007 after March 5th, select your computer manufacturer, and check that you agree to the terms.
    Now you're going to need the Product ID number from Office 2007. To find this, open Word or any other Office 2007 application. Click the Office Orb, and select Options at the bottom. Select the Resources button on the left, and then click About. Near the bottom of this dialog, you'll see your Product ID. This should be a number like: 12345-123-1234567-12345
    Go back to the Office Tech Guarantee signup page in your browser, and enter this Product ID. Select the language of your edition of Office 2007, enter the verification code, and then click Submit. It may take a few moments to validate your Product ID.
    When it is finished, you'll be taken to an order page that shows the edition of Office 2010 you're eligible to receive. The upgrade download is free, but if you'd like to purchase a backup DVD of Office 2010, you can add it to your order for $13.99. Otherwise, simply click Continue to accept.
    Do note that the edition of Office 2010 you receive may be different from the edition of Office 2007 you purchased, as the number of editions has been streamlined in the Office 2010 release. Here's a chart you can check to see which edition you'll receive. Note that you'll still be allowed to install Office on the same number of computers; for example, Office 2007 Home and Student allows you to install it on up to 3 computers in the same house, and your Office 2010 upgrade will allow the same.

    Office 2007 Edition -> Office 2010 Upgrade You'll Receive
    Office 2007 Home and Student -> Office Home and Student 2010
    Office Basic 2007 or Office Standard 2007 -> Office Home and Business 2010
    Office Small Business 2007, Office Professional 2007, or Office Ultimate 2007 -> Office Professional 2010
    Office Professional 2007 Academic or Office Ultimate 2007 Academic -> Office Professional Academic 2010

    Sign in with your Windows Live ID, or create a new one if you don't already have one. Enter your name, select your country, and click Create My Account. Note that Office will send Office 2010 tips to your email address; if you don't wish to receive them, you can unsubscribe from the emails later.

    Finally, you're ready to download Office 2010! Click the Download Now link to start downloading Office 2010. Your Product Key will appear directly above the Download link, so you can copy it and then paste it into the installer when your download is finished. You will additionally receive an email with the download links and product key, so if your download fails you can always restart it from that link.
    If your edition of Office 2007 included the Office Business Contact Manager, you will be able to download it from the second Download link. And, of course, even if you didn't order a backup DVD, you can always burn the installers to a DVD as a backup.

    Install Office 2010
    Once you're finished downloading Office 2010, run the installer to get it installed on your computer. Enter your Product Key from the Tech Guarantee website as above, and click Continue. Accept the license agreement, and then click Upgrade to upgrade to the latest version of Office.
    The installer will remove all of your Office 2007 applications, and then install their 2010 counterparts. If you wish to keep some of your Office 2007 applications instead, click Customize and then select either to keep all previous versions or to keep only specific applications.
    By default, Office 2010 will try to activate online automatically. If it doesn't activate during the install, you'll need to activate it when you first run any of the Office 2010 apps.

    Conclusion
    The Tech Guarantee makes it easy to get the latest version of Office if you recently purchased Office 2007. The Tech Guarantee program is open through the end of September, so make sure to grab your upgrade during this time. Actually, if you find a great deal on Office 2007 from a major retailer between now and then, you could also take advantage of this program to get Office 2010 cheaper.
    And if you need help getting started with Office 2010, check out our articles that can help you get situated in your new version of Office!

    Link
    Activate and Download Your free Office 2010 Tech Guarantee Upgrade

    Read the article

< Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >