Search Results

Search found 22794 results on 912 pages for 'bit cost'.


  • Cost to GC of using weak references in C#?

    - by Scott Bilas
    In another question, Stephen C says: A second concern is that there are runtime overheads with using weak references. The obvious costs are those of creating weak references and calling get on them. A less obvious cost is that significant extra work needs to be done each time the GC runs. So what exactly is the cost to the GC of a weak ref? What extra work does it need to do, and how big of a deal is it? I can make some educated guesses, but am interested in the actual mechanics.
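
    For concreteness, here is a minimal C# sketch (my own, not from either question; it uses WeakReference<T> from .NET 4.5+) of the two obvious costs Stephen C mentions, allocating the weak handle and dereferencing it, with comments on the less obvious one: after marking, the collector also has to visit its weak-handle table and clear entries whose targets were not found reachable.

        using System;

        class WeakRefDemo
        {
            static void Main()
            {
                var data = new byte[1024];                   // ordinary strong reference
                var weak = new WeakReference<byte[]>(data);  // cost 1: allocates a GC handle the collector must track

                // Cost 2: every dereference goes through the handle and checks liveness.
                Console.WriteLine("Reachable before collect: {0}", weak.TryGetTarget(out _));

                data = null;                  // drop the strong reference
                GC.Collect();                 // force a collection, purely for the demo
                GC.WaitForPendingFinalizers();

                // After marking, the GC sweeps its weak-handle table and clears entries whose
                // targets were not marked; whether the target is gone at this exact point can
                // depend on build settings and JIT lifetime tracking.
                Console.WriteLine("Reachable after collect:  {0}", weak.TryGetTarget(out _));
            }
        }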

    Read the article

  • Is the time cost constant when bulk inserting data into an indexed table?

    - by SiLent SoNG
    I have created an archive table which will store data for reads only. Daily, a program will transfer a batch of records into the archive table. Several columns are indexed, while others are not. I am concerned with the time cost per batch insertion:
    - 1st batch insertion: N1
    - 2nd batch insertion: N2
    - 3rd batch insertion: N3
    The question is: will N1, N2, and N3 be roughly the same, or will N3 > N2 > N1? That is, will the time cost be constant or will it grow, given the presence of several indexes? All indexes are non-clustered. The archive table structure is this:

        create table document (
            doc_id   int unsigned primary key,
            owner_id int,        -- indexed
            title    smalltext,
            country  char(2),
            year     year(4),
            time     datetime,
            key ix_owner(owner_id)
        )
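
    A back-of-the-envelope model (mine, not from the question; it assumes ordinary B-tree maintenance for each non-clustered index) suggests the batches get slower, but only logarithmically:

        N_m \approx B \left( c_{\mathrm{heap}} + \sum_{i=1}^{k} c_i \log n_m \right)

    where B is the batch size, k is the number of indexes, c_heap and c_i are per-row constants, and n_m is the number of rows already archived before batch m. Under this model N3 >= N2 >= N1, yet the growth is slow enough that consecutive batches often look nearly constant until page splits, fragmentation, or buffer-pool pressure start to dominate.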

    Read the article

  • Top Tweets SOA Partner Community – May 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity

    SOA Community: BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu
    Torsten Winterberg: Look out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edition! Great news for easy entry, small licence fees. Yes!
    Danilo Schmiedel: Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity
    Frank Nimphius: Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-)
    Oracle: New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9
    OTNArchBeat: BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui
    JDeveloper & ADF: BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR
    Oracle UPK & Tutor: Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions if you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65
    Heidi Buelow: see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/seminar/seminarslist?topicsid=6 Use promo code Fusion for discount!
    Sabine Leitner: #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG planning & setup of WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity
    SOA Community: MDS explorer by Red Samurai http://wp.me/p10C8u-pp
    Biemond ®: Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrie http://bit.ly/JejTHC
    Lucas Jellema: Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org/default.asp?p=9306 (deadline 1st of June)
    OTNArchBeat: BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj
    demed: SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium.com/call2012.php @techsymp #oraclesoa
    OracleBlogs: Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh
    OTNArchBeat: Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU
    orclateamsoa: A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp
    Lucas Jellema: Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht
    OracleBlogs: OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G
    SOA Community: SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl
    Frank Dorst: RT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5) http://bit.ly/Hxzh29 #soacommunity #bpm #oracle
    David Shaffer: The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments.
    OTNArchBeat: Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN
    SOA Community: Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity
    OTNArchBeat: Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg
    Oracle Middleware: Data Integrator & SOA - why 2 products better than one for integration? Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR
    Andrejus Baranovskis: Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w
    SOA Community: “@rluttikhuizen: Chapter 4 of SOA Made Simple book "Classification of Services" ready for collegial review” can #soacommunity get a preview?
    Xavier Verhaeghe: #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011
    Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    Michel Schildmeijer: @wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd
    Oracle Middleware: Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle
    OTNArchBeat: Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K
    SOA Community: Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance/entry/integrating_with_oracle_fusion_applications #soacommunity #oer #governance
    OracleBlogs: Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx
    OracleBlogs: Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA
    SOA Community: Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi
    Sabine Leitner: WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity
    SOA Community: Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/04/18/testing-business-rules/ #soacommunity #soa #rules #oracle
    SOA Community: Top Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn
    OTNArchBeat: Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT
    OTNArchBeat: "Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt
    ServiceTechSymposium: Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com/agenda2012.php
    SOA Community: Shipping Specialization plaques - congratulation #Fujitsu - request yours https://soacommunity.wordpress.com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion
    ServiceTechSymposium: Call for Presentations Submission Deadline Moved Up to May 21, 2012. Send your presentation submissions ASAP!
    ServiceTechSymposium: Symposium Keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency"
    SOA Community: Running a large #soa project? Make sure you read - Oracle SOA Suite 11g Database Growth Management #soacommunity #opn
    SOA Community: List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity

    For regular information on Oracle SOA Suite become a member in the SOA Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). Blog Twitter LinkedIn Mix Forum
    Technorati Tags: soacommunity, twitter, Oracle, SOA Community, Jürgen Kress, OPN, SOA, BPM

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details.

    The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

        % cc -c ~/hello.c
        % ar r foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 32-bit symbol table
        % ar r -S foo.a hello.o
        % file foo.a
        foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile.

    Why file Is No Good For Archives And Yet Should Not Be Fixed

    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things:

    1. Is this an archive?
    2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?

    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). Answering that question requires a multi-step process:

    1. Extract all archive members.
    2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
    3. Remove the extracted files.

    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:

    - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file.
    - Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
    - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
    - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.

    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command

    The syntax for this new command is:

        elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

        File         Description
        config       A runtime linker configuration file produced with crle
        dwarf.o      An ELF object
        /etc/passwd  A text file
        mixed.a      Archive containing a mixture of ELF and non-ELF members
        mixed_elf.a  Archive containing ELF objects for different machines
        not_elf.a    Archive containing no ELF objects
        same_elf.a   Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

        % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: ascii text
        mixed.a:     current ar archive, 32-bit symbol table
        mixed_elf.a: current ar archive, 32-bit symbol table
        not_elf.a:   current ar archive
        same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:

    1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.

    Applying elffile to the above files:

        % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
        config:      Runtime Linking Configuration 64-bit MSB SPARCV9
        dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
        /etc/passwd: non-ELF
        mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
        mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
        not_elf.a:   current ar archive, non-ELF content
        same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: the vast majority of archives contain only ELF objects for a single platform, and in this case the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects and more information is desired than the summary output provides:

        % elffile -s detail mixed.a
        mixed.a: current ar archive, 32-bit symbol table
        mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
        mixed.a(main.c): non-ELF content
        mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]

    Read the article

  • How can I determine which GPU card is running at PCI Express 2.0 x16 & which is using x8?

    - by M. Tibbits
    Is there a way to determine the speed of the PCI Express connection to a specific card? I have three cards plugged in:
    - two Nvidia GTX 480s (one at x16 and one at x8)
    - one Nvidia GTX 460 running at x8

    Is there some way, either by a function call in C or an option to lspci, that I can determine the bus speed of the graphics cards? When I only use one of the cards for my CUDA program, I'd like to use the one which is running at x16. Thanks!

    Note: lspci -vvv dumps out the following for the two GTX 480s. I don't see any differences that pertain to bus speed.

        03:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0
            Interrupt: pin A routed to IRQ 16
            Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
            Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
            Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
            Region 5: I/O ports at df00 [disabled] [size=128]
            [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
            Capabilities: <access denied>
            Kernel driver in use: nvidia
            Kernel modules: nvidia, nvidiafb, nouveau

        03:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Interrupt: pin B routed to IRQ 5
            Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
            Capabilities: <access denied>

        04:00.0 VGA compatible controller: nVidia Corporation Device 06c0 (rev a3)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0
            Interrupt: pin A routed to IRQ 16
            Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
            Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
            Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
            Region 5: I/O ports at cf00 [size=128]
            [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
            Capabilities: <access denied>
            Kernel driver in use: nvidia
            Kernel modules: nvidia, nvidiafb, nouveau

        04:00.1 Audio device: nVidia Corporation Device 0be5 (rev a1)
            Subsystem: eVga.com. Corp. Device 1480
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
            Latency: 0, Cache Line Size: 64 bytes
            Interrupt: pin B routed to IRQ 5
            Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>

    And the only differences I see relate specifically to the memory mapping:

        myComputer:~> diff card1 card2
        3c3
        < Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        ---
        > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        7,11c7,11
        < Region 0: Memory at d4000000 (32-bit, non-prefetchable) [size=32M]
        < Region 1: Memory at b0000000 (64-bit, prefetchable) [size=128M]
        < Region 3: Memory at bc000000 (64-bit, prefetchable) [size=64M]
        < Region 5: I/O ports at df00 [disabled] [size=128]
        < [virtual] Expansion ROM at b8000000 [disabled] [size=512K]
        ---
        > Region 0: Memory at dc000000 (32-bit, non-prefetchable) [size=32M]
        > Region 1: Memory at c0000000 (64-bit, prefetchable) [size=128M]
        > Region 3: Memory at cc000000 (64-bit, prefetchable) [size=64M]
        > Region 5: I/O ports at cf00 [size=128]
        > [virtual] Expansion ROM at c8000000 [disabled] [size=512K]
        18c18
        < Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        ---
        > Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        19a20
        > Latency: 0, Cache Line Size: 64 bytes
        21c22
        < Region 0: [virtual] Memory at d7ffc000 (32-bit, non-prefetchable) [disabled] [size=16K]
        ---
        > Region 0: Memory at dfffc000 (32-bit, non-prefetchable) [size=16K]

    Read the article

  • 14.04.1 LTS 64 bit from USB does not see my windows 7 when I go to install it

    - by W J
    I suppose this is as much a question as a heads up/warning: 14.04.1 LTS only gave me the option of writing over everything on one of my Windows 7 machines. If I'd pushed the wrong button and continued, I would have lost some mighty important items. On a similar Windows laptop I successfully installed 14.04.1 LTS 32-bit alongside Windows 7 rather easily (and I dig it!); there was a prompt in that case that let me select to install it alongside my Windows OS. Yikes, not in this case. This laptop was formatted NTFS, and the Ubuntu USB pendrive I formatted FAT32; could that be a clue? It looks like there is an advanced Ubuntu install option, but I am not that advanced. I may try to use the Windows disk formatter (what, fdisk is gone?) to make a partition, then see if the Ubuntu on my USB stick "sees" my Windows. If anybody has a better plan, let me know. Mahalo! P.S. It's an SSD hard drive; perhaps that's the crux?

    Read the article

  • Oracle Java Products Updates (2013/10/30)

    - by Hiro
    The Oracle Java Products Media Pack was updated on 2013/10/30. Oracle JRE/JDK 7 Update 45 has been added. Supported platforms: Apple Mac OS X (Intel) (64-bit), Linux x86, Linux x86-64, Microsoft Windows (32-bit), Microsoft Windows x64, Oracle Solaris on SPARC (32-bit), Oracle Solaris on SPARC (64-bit), Oracle Solaris on x86 (32-bit), Oracle Solaris on x86-64 (64-bit).

    Read the article

  • OBIEE 11.1.1.5.0BP4 Bundle Patch now Available

    - by THE
    The Oracle Business Intelligence Enterprise Edition - OBIEE 11.1.1.5.4 (a.k.a. 11.1.1.5.0BP4) bundle patch is now available for download from My Oracle Support. The 11.1.1.5.4 bundle patch includes 40 bug fixes. It is cumulative, so it includes everything in 11.1.1.5.0PS1, 11.1.1.5.0BP2 and 11.1.1.5.0BP3. Please note that this release is only recommended for BI customers.

    Bundled Patch Details: Bug# 14517379 (tracking bug for OBIEE bundle patch 11.1.1.5.4). This patch is available for the supported platforms: Microsoft Windows (32-bit), Linux x86 (32-bit), Microsoft Windows x64 (64-bit), Linux x86-64 (64-bit), Oracle Solaris on SPARC (64-bit), Oracle Solaris on x86-64 (64-bit), IBM AIX PPC (64-bit), HP-UX IA (64-bit).

    Read the article

  • Is this a valid implementation of the repository pattern?

    - by user1578653
    I've been reading up about the repository pattern, with a view to implementing it in my own application. Almost all examples I've found on the internet use some kind of existing framework rather than showing how to implement it 'from scratch'. Here are my first thoughts on how I might implement it; I was wondering if anyone could advise me on whether this is correct? I have two tables, named CONTAINERS and BITS. Each CONTAINER can contain any number of BITs. I represent them as two classes:

        class Container{
            private $bits;
            private $id;
            //...and a property for each column in the table...

            public function __construct(){
                $this->bits = array();
            }

            public function addBit($bit){
                $this->bits[] = $bit;
            }

            //...getters and setters...
        }

        class Bit{
            //some properties, methods etc...
        }

    Each class will have a property for each column in its respective table. I then have a couple of 'repositories' which handle things to do with saving/retrieving these objects from the database:

        //repository to control saving/retrieving Containers from the database
        class ContainerRepository{

            //inject the bit repository for use later
            public function __construct($bitRepo){
                $this->bitRepo = $bitRepo;
            }

            public function getById($id){
                //talk directly to Oracle here to load all column data into the object
                //get all the bits in the container
                $bits = $this->bitRepo->getByContainerId($id);
                foreach($bits as $bit){
                    $container->addBit($bit);
                }
                //return an instance of Container
            }

            public function persist($container){
                //talk directly to Oracle here to save it to the database
                //if its ID is NULL, create a new container in the database, otherwise update the existing one
                //use BitRepository to save each of the Bits inside the Container
                $bitRepo = $this->bitRepo;
                foreach($container->bits as $bit){
                    $bitRepo->persist($bit);
                }
            }
        }

        //repository to control saving/retrieving Bits from the database
        class BitRepository{
            public function getById($id){}
            public function getByContainerId($containerId){}
            public function persist($bit){}
        }

    Therefore, the code I would use to get an instance of Container from the database would be:

        $bitRepo = new BitRepository();
        $containerRepo = new ContainerRepository($bitRepo);
        $container = $containerRepo->getById($id);

    Or to create a new one and save it to the database:

        $bitRepo = new BitRepository();
        $containerRepo = new ContainerRepository($bitRepo);

        $container = new Container();
        $container->setSomeProperty(1);

        $bit = new Bit();
        $container->addBit($bit);

        $containerRepo->persist($container);

    Can someone advise me as to whether I have implemented this pattern correctly? Thanks!

    Read the article

  • What can I do to lower bandwidth cost on a bandwidth heavy site?

    - by acidzombie24
    The easiest answer is a CDN, but I'd like to ask anyway. A friend of mine has a server that is used for mirror downloads. He says he is doing about 10 TB of bandwidth a month, which shocked me (I wonder if he is lying). I've seen his site and he has no ads. I suspect he might close his website once he gets the bill. Anyway, since his CPU/RAM is barely used and his disk usage is around 15 GB, I was wondering what he can do to lower cost if he continues this site. I said put up ads, but I don't know if ads would cover it. I found one CDN which offers $0.070/GB; 10240 GB (10 TB) * 0.07 = $717 a month. That seems a little steep, but he is using lots of traffic due to it being a mirror site. Also, a CDN doesn't really make sense, as he doesn't need multiple servers hosting the files in different areas (which is one reason he isn't using one now); he just needs a big upload pipe. Is there something he can do? At the moment he is paying $200 a month for a dedicated server and he is using WAY more bandwidth than he should be. Side question: can gzipping large, already-compressed files (zip, rar, etc.) help?

    Read the article

  • What benefit do I get from using a 64-bit server?

    - by blockhead
    I bought a small 256 MB slice from Slicehost and installed Ubuntu 10.04 64-bit and WordPress on it. Performance was dismal, as Apache was eating up all my memory. Once I did some taming of Apache and switched to FastCGI, things ran fine. Next I rebuilt it as a 32-bit server, and performance was much better. What benefit would I get from a 64-bit server? Is it all about the memory?

    Read the article

  • How to tell if Microsoft Works is 32 or 64 bit? Please Help!

    - by Bill Campbell
    Hi, I am trying to convert one of our apps to run on Win7 64-bit instead of XP 32-bit. One of the things it uses is Excel to import files. It's a little complicated, since it was using Microsoft.Jet.OLEDB.4.0 (Excel). I found that Office 14 (2010) has a 64-bit version I can download. I downloaded the Office 2010 Beta, but it didn't seem to install Microsoft.ACE.OLEDB.14.0. I found that I could download the 2010 Office System Driver Beta: Data Connectivity Components, which has ACE.OLEDB.14 in it, but when I try to install it, the installer tells me "You cannot install the 64-bit version of Access Database engine for Microsoft Office 2010 because you currently have 32-bit Office products installed". How do I determine which 32-bit Office products this is referring to? My Dell came with Microsoft Works installed. I don't know if this is 32 or 64 bit. Is there any way to tell? I don't want to uninstall it if it's not the problem, and I'm not sure what else might be the problem. Any help would be appreciated! thanks, Bill
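
    One way to narrow this down is to list the OLE DB providers registered for each bitness, since the Jet/ACE part of the problem is really about which providers are installed on which side. The sketch below is mine, not from the question; compiled once as x86 and once as x64, it shows which build of the process can see a Jet or ACE provider.

        using System;
        using System.Data;
        using System.Data.OleDb;

        class ListOleDbProviders
        {
            // Lists the OLE DB providers registered for the bitness of *this* process.
            // Build once as x86 and once as x64: whichever build shows a
            // Microsoft.Jet.OLEDB.* or Microsoft.ACE.OLEDB.* entry tells you which side
            // already has an Access database engine registered.
            static void Main()
            {
                Console.WriteLine("Running as a {0}-bit process", IntPtr.Size == 8 ? 64 : 32);

                DataTable providers = new OleDbEnumerator().GetElements();
                foreach (DataRow row in providers.Rows)
                    Console.WriteLine("{0,-35} {1}", row["SOURCES_NAME"], row["SOURCES_DESCRIPTION"]);
            }
        }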

    Read the article

  • Microsoft Silverlight cannot be used in browsers running in 64 bit mode.

    - by JD
    Hi, I have taken ownership of an application which is a .NET WinForms application that hosts a System.Windows.Forms.WebBrowser. When the application launches it makes an HTTP request to load a XAP file. I immediately see the "install Silverlight" icon, and on clicking it I get: "Microsoft Silverlight cannot be used in browsers running in 64 bit mode". The app was initially written on a 32-bit machine, although I have a 64-bit machine. Any ideas what I need to tweak to get this running? JD
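
    The Silverlight plug-in of that era was 32-bit only, so the usual fix was to set the WinForms host's Platform Target to x86 so that the embedded WebBrowser runs in a 32-bit process. The snippet below is only a diagnostic sketch (the class and method names are mine) for confirming which way the host is currently running:

        using System;
        using System.Windows.Forms;

        static class BitnessCheck
        {
            // Call early in the WinForms host: the 32-bit Silverlight plug-in can only be
            // loaded by the WebBrowser control when the host itself is a 32-bit process.
            public static void Report()
            {
                bool is64BitProcess = IntPtr.Size == 8;   // avoids .NET 4-only APIs
                MessageBox.Show(is64BitProcess
                    ? "Host is a 64-bit process; the 32-bit Silverlight plug-in cannot load."
                    : "Host is a 32-bit process; Silverlight should be able to load.");
            }
        }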

    Read the article

  • ShowWindow not working from a DLL on a 64-bit OS?

    - by Auto Roast
    I have a process that calls SetWindowsHook to catch keyboard events. In the DLL that processes the events, I conditionally call ShowWindow on the handle of the window of the process that set the hook. That code works perfectly on a 32-bit OS (XP) and as a 32-bit application on a 64-bit OS, but when compiled to 64-bit, the window is not showing. The code to make the window visible is:

        if (idx == passlen)
        {
            HWND h = FindWindow(NULL, windowNameToShow);
            ShowWindow(h, SW_SHOW);
            idx = 0;
            logger->backerase(passlen - 1);
            nextCharToMatch = passPointer;
        }

    Read the article

  • What operating systems are available for an 8-bit microprocessor?

    - by Benoit
    It does not need to be a full-fledged OS, but it should at least have multitasking capabilities (i.e. a scheduler). Please mention what processor architecture it works on. This is a survey, so exact capabilities are not really important. Think of this as a place to look for possibilities when your next 8-bit embedded project comes up... I realize that most 8-bit micros do not require an OS, but just as a counter-example, Rabbit Semiconductor offers the RCM3710 processor module with 4 serial ports, a 10BaseT Ethernet port, 512K RAM and 512K Flash. All that for $39, and all based around an 8-bit Z80 core. 8-bit does NOT necessarily mean extreme resource constraint.

    Read the article

  • Continuous Integration with 64-bit Sharepoint and TFS 2008?

    - by Hirvox
    I've set up a 64-bit TFS 2008 build server with Sharepoint, continuous integration and out-of-the-box MSTest. Unit tests for plain business logic classes run just fine and test results are published into TFS. However, any test that uses Sharepoint's API fails horribly, SPFarm.Local returning null and so on. Is there a way to fix this? The tests run fine in an otherwise identical 32-bit development environment (Windows Server 2008 under Hyper-V, Sharepoint patched up to June 2009 cumulative update) from both Visual Studio and command line, so the problem is not about improper use of SPContext.Current or any other part of the API that needs to be run in a web server context. I've ruled out permissions issues, because the build agent account can deploy the solution and create site collections just fine with stsadm. The next culprit could be that the unit tests were being run with a 32-bit process, which couldn't access the 64-bit Sharepoint API properly. I tried a workaround, but it has the side effect of disabling TFS support in MSTest. Do I have to wait for 2010 versions of MS tools (and hope for the best) or is there a third-party test framework available that runs natively in 64 bit and can publish test results into TFS 2008?

    Read the article

  • How does 64 bit code work on OS-X 10.5?

    - by philcolbourn
    I initially thought that 64-bit instructions would not work on OS X 10.5. I wrote a little test program and compiled it with GCC -m64. I used long long for my 64-bit integers. The assembly instructions used look like they are 64-bit, e.g. imulq and movq 8(%rbp),%rax. It seems to work. I am only using printf to display the 64-bit values using %lld. Is this the expected behaviour? Are there any gotchas that would cause this to fail? Am I allowed to ask multiple questions in a question? Does this work on other OSes?

    Read the article

  • How to build a 64-bit DLL from a C source file, def file, and link file using the command line in VC 6.0

    - by allan
    Hi all, my compile environment is Windows XP and VC 6.0. I have a C source file (msgRout.c), a def file (msgRout.def), and a link file (msgRout.link), and I use the commands below to build a 32-bit DLL:

        1. cl /I ../include -c -W3 -Gs- -Z7 -Od -nologo -LD -D_X86_=1 -DWIN32 -D_WIN32 -D_MT -D_DLL msgRout.c
        2. lib -out:msgRout.lib -def:msgRout.def -machine:i386
        3. link /LIBPATH:../../Lib -nod -nologo -debug:full -dll @msgRout.link -out:msgRout.dll

    But the DLL I get cannot be loaded by an x64 application; it requires a 64-bit DLL. So here is my question: can I build a 64-bit DLL with VC 6.0? Using roughly the same three commands, how can I get a 64-bit DLL? Many GREAT THANKS!!! Allan

    Read the article

  • How to call a SQL string from NHibernate

    - by frosty
    I have the following method; at the moment it returns the whole SQL string. How would I execute the following?

        using (ITransaction transaction = session.BeginTransaction())
        {
            string sql = string.Format(
                @"DECLARE @Cost money
                  SET @Cost = -1
                  select @Cost = MAX(Cost)
                  from item_costings
                  where Item_ID = {0}
                  and {1} >= Qty1
                  and {1} <= Qty2
                  RETURN (@Cost)", itemId, quantity);

            string mystring = session
                .CreateSQLQuery(sql)
                .ToString();

            transaction.Commit();
            return mystring;
        }

    // EDIT: here is the final version using criteria

        using (ISession session = NHibernateHelper.OpenSession())
        {
            decimal cost = session
                .CreateCriteria(typeof(ItemCosting))
                .SetProjection(Projections.Max("Cost"))
                .Add(Restrictions.Eq("ItemId", itemId))
                .Add(Restrictions.Le("Qty1", quantity))
                .Add(Restrictions.Ge("Qty2", quantity))
                .UniqueResult<decimal>();

            return cost;
        }
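
    If the raw-SQL route is kept, a parameterized sketch along these lines (my own variant, not from the question; it assumes MAX(Cost) maps cleanly to a nullable decimal) avoids building SQL with string.Format and actually executes the query, since ToString() on the query object only yields the query text rather than running it. The criteria version above remains the cleaner option.

        using NHibernate;

        public static class ItemCostQueries
        {
            // Hypothetical parameterized variant of the raw-SQL approach from the question;
            // table and column names are taken from the original snippet.
            public static decimal GetMaxCost(ISession session, int itemId, int quantity)
            {
                decimal? cost = session
                    .CreateSQLQuery(@"select max(Cost)
                                        from item_costings
                                       where Item_ID = :itemId
                                         and :qty >= Qty1
                                         and :qty <= Qty2")
                    .SetInt32("itemId", itemId)
                    .SetInt32("qty", quantity)
                    .UniqueResult<decimal?>();

                return cost ?? -1m;   // mirror the original RETURN(-1) default when nothing matches
            }
        }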

    Read the article

  • Does IBM WebSphere MQ support a 64 bit client on windows?

    - by orpeles
    Title says it all :) No, seriously, I'm porting a C++ 32-bit application to 64-bit on Windows. The application is a client of IBM WebSphere MQ and uses the MQ client API. Now, as the port progresses, I'm trying to find a 64-bit client. So far, no luck. Does anyone here happen to know where I can find one, or, god forbid, confirm that there isn't one? Regards, Or

    Read the article
