Search Results

Search found 780 results on 32 pages for 'trunk'.


  • Manipulating gl_LightSource[gl_MaxLights]

    - by pion
    The OpenGL® Shading Language Specification, version 1.20 (http://www.opengl.org/registry/doc/GLSLangSpec.Full.1.20.8.pdf), defines the following structure:

        struct gl_LightSourceParameters {
            vec4 ambient;               // Acli
            vec4 diffuse;               // Dcli
            vec4 specular;              // Scli
            vec4 position;              // Ppli
            vec4 halfVector;            // Derived: Hi
            vec3 spotDirection;         // Sdli
            float spotExponent;         // Srli
            float spotCutoff;           // Crli (range: [0.0,90.0], 180.0)
            float spotCosCutoff;        // Derived: cos(Crli) (range: [1.0,0.0], -1.0)
            float constantAttenuation;  // K0
            float linearAttenuation;    // K1
            float quadraticAttenuation; // K2
        };
        uniform gl_LightSourceParameters gl_LightSource[gl_MaxLights];

    Also, http://www.lighthouse3d.com/opengl/glsl/index.php?ogldir1 shows the following code snippet:

        lightDir = normalize(vec3(gl_LightSource[0].position));

    The WebGL specification (https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.10) lists many uniform* functions, but nothing seems to deal with an array of uniform variables like gl_LightSource[0]. How do we set the gl_LightSource[0] fields in WebGL using JavaScript? For example, gl_LightSource[0].position. Thanks in advance for your help. Note: cross-posted at http://www.khronos.org/message_boards/viewtopic.php?f=43&t=3373
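    One possible direction (a sketch, not a confirmed answer; the uniform and variable names below are hypothetical): WebGL's GLSL ES does not provide the built-in gl_LightSource array at all, so the usual approach is to declare your own uniform struct array in the shader and set each field from JavaScript by its fully qualified name:

        // GLSL ES (in your shader) - a hand-rolled replacement for gl_LightSource:
        //   struct LightSource { vec4 position; vec4 diffuse; };
        //   uniform LightSource uLights[2];

        // JavaScript - struct-array members are located by their full name:
        var posLoc = gl.getUniformLocation(program, "uLights[0].position");
        gl.uniform4fv(posLoc, [0.0, 10.0, 5.0, 1.0]);  // sets uLights[0].position
        var difLoc = gl.getUniformLocation(program, "uLights[0].diffuse");
        gl.uniform4fv(difLoc, [1.0, 1.0, 1.0, 1.0]);   // sets uLights[0].diffuse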


  • Cutting Edge versus Just Average? Your SOA, Got BPM? by Mala Ramakrishnan

    - by JuergenKress
    Service Oriented Architecture (SOA) has completely transformed IT since it was introduced well over a decade ago. Organizations have been re-plumbing their infrastructure for reusability, efficiency and gain, and succeeding with it. Best practices have emerged, and people and technology have matured. We have become better at delivering mission-critical applications and services on a stable platform. Yet there is one secret that sets some SOA customers apart from the others. These companies grow and revolutionize their business, not just transform their IT infrastructure. The differences seem subtle to an untrained eye examining these organizations externally. And from within the company, it's a bit like an ant sitting on an elephant: hard to differentiate between the IT trunk and the business tail. What is it that some organizations do differently that makes them succeed beyond SOA? These organizations pull business people more and more into their IT decisions. They insist on understanding process over services. They don't settle easily when bridging business metrics and IT performance. They anguish over business requirements not translating seamlessly and quickly into IT. IT is not just an enabler but a pillar that revolutionizes their business. Okay, I'll give it to you: these organizations layer Business Process Management (BPM) on top of their SOA. Think about the lifeblood business processes in your own organization. If you are FedEx, this would be shipping and handling. If you are Stanford Hospital, this would be patient case management: from on-boarding through discharge and follow-up care. If you are Wells Fargo, this would be loan origination. Now think about how your SOA ties into your business processes. Can you decouple your business processes from your SOA so that the two can transform and change independently of each other? Can you forecast success metrics for your business process, make the changes across the board, and then look back over different periods of time to see if you are on track? Are your critical business processes entrenched in the minds of a few experts in your organization, or does everyone from the receptionist to your enterprise architect to your CEO understand what they can do to revolutionize them? Business Process Management is a superset of SOA. It is the practice of getting your business to articulate business value and metrics and having them implemented in IT without any loss in translation. It is the act of extracting the business processes from the minds of experts and the IT applications in your organization and valuing them as assets for performance and gain. BPM is stepping outside your SOA and moving your organization to the next level of innovation. Oracle is accelerating BPM across industries with the latest launch. Join us to understand how BPM can give your organization a cutting edge over your SOA. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: SOA,BPM,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress


  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It's not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project. When I was at Aggreko we had a fairly successful feature-branching strategy (although the developers hated it) that meant we could have multiple feature teams working at the same time without impacting each other. This had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging. Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum, each Scrum Team works for a fixed period of time on a single Sprint. You can have one or more Scrum Teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a "Main" (sometimes called "Trunk") line and doing a branch for each sprint. It's like feature branching, but with only ONE feature in operation at any one time, so there are no conflicts. (A minimal command-line sketch of this flow appears at the end of this post.)

    Figure: DEV folder containing the Development branches.

    I know that some folks advocate applying a label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch.

    Like:
    - Being able to create a release from Main that has Sprint 3 code even while Sprint 4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the sprint without having to delete the code (rare, I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone's Sprint branch but never got checked in. (We had this at the merge of Sprint 2.)
    - If you always operate this way as a standard, it makes it easier to add more Scrum teams in the future.
    - Muscle memory of this way of working.

    Don't like:
    - Additional DB space for the branches.
    - Baseless merging between sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between sprint branches, since the changes go up through Main and down to another sprint branch. Note: what you would have to do is see which Sprint the changes were made in and then check the history of the same file in that Sprint. A little bit of added complexity that you would have to deal with anyway with multiple teams.
    - Over time, you can end up with a lot of old unused sprint branches. Perhaps destroy with /keephistory can help in this case. Note: we ALWAYS delete the Sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images, Sprint 2 has already been deleted.
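    To make the mechanics concrete, here is a minimal command-line sketch of the per-sprint flow using tf.exe (the server paths are hypothetical; adjust to your own Team Project):

        rem Branch Main into a new Sprint branch at the start of the sprint
        tf branch $/Product/DEV/Main $/Product/DEV/Sprint4 /checkin

        rem At the end of the sprint, Reverse Integrate back into Main
        tf merge $/Product/DEV/Sprint4 $/Product/DEV/Main /recursive
        tf checkin /comment:"RI Sprint4 into Main"

        rem Decommission the sprint branch once merged (history stays on Main)
        tf destroy $/Product/DEV/Sprint4 /keephistory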
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching


  • problem on my site [closed]

    - by ali
    Hello, I have a problem on my site: errors were found while checking the document as XHTML. The errors were discovered by the program A1 Website Analyzer. How and where can they be repaired? Can you fix my site? I really do not know about the programming language; if possible please email me, and thank you. Please see this link: http://jigsaw.w3.org/css-validator/validator?uri=http://www.moneybillion.com/#errors

    Validation output: 19 errors.

    Line 193, Column 261: there is no attribute "data-count" - …ass="twitter-share-button" data-count="horizontal" data-via="elibeto1199"Twee… You have used the attribute named above in your document, but the document type you are using does not support that attribute for this element. This error is often caused by incorrect use of the "Strict" document type with a document that uses frames (e.g. you must use the "Transitional" document type to get the "target" attribute), or by using vendor proprietary extensions such as "marginheight" (this is usually fixed by using CSS to achieve the desired effect instead). This error may also result if the element itself is not supported in the document type you are using, as an undefined element will have no supported attributes; in this case, see the element-undefined error message for further information. How to fix: check the spelling and case of the element and attribute (remember XHTML is all lower-case), and/or check that they are both allowed in the chosen document type, and/or use CSS instead of this attribute. If you received this error when using an element to incorporate flash media in a Web page, see the FAQ item on valid flash.

    Line 193, Column 283: there is no attribute "data-via" - …ton" data-count="horizontal" data-via="elibeto1199"Tweet (same explanation as above).

    Line 193, Column 393: required attribute "type" not specified - …ript" src="//platform.twitter.com/widgets.js"var facebook = { The attribute given above is required for an element that you've used, but you have omitted it. For instance, in most HTML and XHTML document types the "type" attribute is required on the "script" element and the "alt" attribute is required for the "img" element. Typical values are type="text/css" for style and type="text/javascript" for script.

    Line 194, Column 23: element "data:post.url" undefined - url : "", You have used the element named above in your document, but the document type you are using does not define an element of that name. This error is often caused by: incorrect use of the "Strict" document type with a document that uses frames (e.g. you must use the "Frameset" document type to get the frameset element); using vendor proprietary extensions (this is usually fixed by using CSS to achieve the desired effect instead); or using upper-case tags in XHTML (in XHTML attributes and elements must be all lower-case).

    Line 197, Column 62: required attribute "type" not specified - src="http://orkut-share.googlecode.com/svn/trunk/facebook.js"
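    Two of the recurring errors have straightforward fixes, sketched below (assuming the flagged lines come from embedded Twitter/Facebook widget markup):

        <!-- XHTML requires the type attribute on every script element: -->
        <script type="text/javascript" src="//platform.twitter.com/widgets.js"></script>
        <script type="text/javascript"
                src="http://orkut-share.googlecode.com/svn/trunk/facebook.js"></script>

        <!-- data-count and data-via are HTML5 data-* attributes and will never
             validate as XHTML; switching the page to the HTML5 doctype is the
             simplest way to keep the Twitter button and still validate: -->
        <!DOCTYPE html>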


  • `svn checkout` on the SVN server causes the repo to break with a 301 error

    - by Phillip Oldham
    We have an nginx server which proxies to a standard set-up of Apache+SVN. The nginx set-up is a very simple proxy:

        server {
            server_name svn.ourdomain.tld;
            location / {
                proxy_pass http://localhost:8080;
            }
        }

    Apache is set up as follows:

        <Location />
            DAV svn
            SVNParentPath /var/svn
            AuthType Basic
            AuthName "Authentication Required"
            AuthUserFile /var/svn/.auth
            Require valid-user
        </Location>

    ...which allows us to access repositories using something like http://svn.ourdomain.tld/repo. We've been running this set-up now for about 2 years without issue. Recently we've found that we need to check out one of the repositories onto the server itself; however, whenever we do so it seems to break the repo. From that point on, it will only respond with a 301 Moved Permanently error. We've tried:

        svn co file:///path/to/repo
        svn co svn://localhost/repo
        svn co svn://svn.ourdomain.tld/repo
        svn co svn+ssh://localhost/repo
        svn co svn+ssh://svn.ourdomain.tld/repo
        svn co http://localhost/repo
        svn co http://svn.ourdomain.tld/repo

    We also tried bypassing nginx, and get the same error:

        svn co http://localhost:8080/repo
        svn co http://svn.ourdomain.tld:8080/repo

    Checking out from a different machine works as expected until we attempt to check out on the server; after that it refuses with the same 301 error. What is more confusing is that this repository server also hosts our Hudson CI server, which pulls and builds our projects hourly. This leads us to suspect that it's the svn client which is causing an error in communication. It's also very confusing that removing and then re-creating the repo using svnadmin doesn't reset the error - the repo is still unavailable even though it's "new"! Restarting Apache and Subversion (svnserve) has no effect on this, or on the original error.

    Version information:
        OS: 64-bit CentOS 4.2, 2.6.27 kernel
        svn client: 1.4.2 (same for both server and remote clients)
        svn server: 1.4.2
        httpd: 2.2.3

    UPDATE: This also happens with svn export when run on the repo server. Run from any other box/client, there isn't a problem. Here's the workflow, to help clarify the error:

        [~repo-server~]# svnadmin create {repo}; chown -Rf www:www {repo}
        [remote-client]# svn checkout http://svn.ourdomain.tld/repo
        [remote-client]# svn add file; svn ci -m ''
        [~repo-server~]# cd /var/www; svn export file:///path/to/repo/trunk ourproject
        [remote-client]# svn update

    ...and the update fails with a 301 error. I can also confirm that the hostname of the box doesn't have an effect here, which is very odd: whether or not svn.ourdomain.tld is added to /etc/hosts it still breaks - we thought it could be an issue with localhost routing, but that doesn't seem to be the case. Are we missing something in the documentation which states you can't check out a repo when the server is on the same box? How can we stop the repos becoming corrupt when we check out locally?
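    A few diagnostics that may help narrow this down (a sketch; the paths are placeholders, and none of this is a confirmed fix):

        # Verify the repository's integrity after a local checkout "breaks" it
        svnadmin verify /var/svn/repo

        # Inspect the 301 directly; the Location header shows where
        # Apache/nginx thinks the repository has moved to
        curl -I http://svn.ourdomain.tld/repo

        # Check what URL an existing working copy is bound to
        svn info /var/www/ourproject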


  • Issues getting a Cisco WLC 5508 to find AIR-LAP1142N

    - by user95917
    Hoping someone can help me with a problem here. I'm attempting to set up a test wireless network (on loan from Cisco). Here's what I've got/done:

    5508 Controller - Service Port IP set to 10.74.5.2 /24. Management IP set to 10.74.6.2 /24 with a default gateway of 10.74.6.1. Virtual IP set to 1.1.1.1. Copper SFP in slot 7, CAT5 (known good) going from there to port 1/0/47 on the switch. Green lights on both ends.

    2960-S Switch - Vlan1 - 10.74.6.1 /24. DHCP pool 10.74.6.0 /24, default router 10.74.6.1, excluded addresses 10.74.6.1 and 10.74.6.2. Port 1/0/4 on the switch is set to switchport mode access and no shut. Port 1/0/47 on the switch is set to switchport mode trunk and no shut. Port 1/0/4 has a CAT5 (known good) cable going from there to the AP.

    When I do a sh cdp nei from the switch, I can see the AP and controller listed. When I configure my PC's NIC to 10.74.5.5 and plug a cable from my NIC to the SP port on the controller, I can get on the device via the GUI. In there, the only errors/info that show up in the trap are:

        Link Up: Slot: 0 Port: 7
        Controller time base status - Controller is out of sync with the central timebase.

    I've manually set the time, but apparently that's not quite the problem (or at least not the entire problem). When I plug the AP in, I see on the switch console that it grants it power and sees it connect... but the controller won't see it for some reason. From what I've read you shouldn't have to do anything to the AP, as it's managed by the controller... but I'm not sure what setting I'm missing for it to work. The AP light on top continually cycles green, red, yellow. When I first start it up, it blinks green for 20 or so seconds, then goes to solid green for another 20 seconds or so, then flashes blue, green, red for a while... but always ends up going back to the standard green, red, yellow. Does anyone see any obvious issues with my setup or have any suggestions as to why I might be having a problem? Thanks for your help!
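    One thing worth checking (a sketch, offered as an assumption rather than a confirmed fix): a lightweight AP needs a way to discover the WLC's management IP, and when broadcast discovery fails the usual mechanism is DHCP option 43 on the pool serving the AP. The TLV below encodes type 0xf1, length 0x04 (one controller), and the management IP 10.74.6.2; the option 60 string shown is the vendor class for an 1140-series AP:

        ip dhcp pool ap-pool
           network 10.74.6.0 255.255.255.0
           default-router 10.74.6.1
           option 60 ascii "Cisco AP c1140"
           option 43 hex f104.0a4a.0602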


  • How do I protect a low budget network from rogue DHCP servers?

    - by Kenned
    I am helping a friend manage a shared internet connection in an apartment building with 80 apartments - 8 stairways with 10 apartments in each. The network is laid out with the internet router at one end of the building, connected to a cheap non-managed 16-port switch in the first stairway where the first 10 apartments are also connected. One port is connected to another 16-port cheapo switch in the next stairway, where those 10 apartments are connected, and so forth. Sort of a daisy chain of switches, with 10 apartments as spokes on each "daisy". The building is a U-shape, approximately 50 x 50 meters, 20 meters high - so from the router to the farthest apartment it's probably around 200 meters including up-and-down stairways. We have a fair bit of problems with people hooking up wifi routers the wrong way, creating rogue DHCP servers which interrupt large groups of the users, and we wish to solve this problem by making the network smarter (instead of doing a physical unplugging binary search). With my limited networking skills, I see two ways: DHCP snooping, or splitting the entire network into separate VLANs for each apartment. Separate VLANs give each apartment their own private connection to the router, while DHCP snooping will still allow LAN gaming and file sharing. Will DHCP snooping work with this kind of network topology, or does that rely on the network being in a proper hub-and-spoke configuration? (A configuration sketch follows below.) I am not sure if there are different levels of DHCP snooping - say, expensive Cisco switches will do anything, but inexpensive ones like TP-Link, D-Link or Netgear will only do it in certain topologies? And will basic VLAN support be good enough for this topology? I guess even cheap managed switches can tag traffic from each port with its own VLAN tag, but when the next switch in the daisy chain receives the packet on its "downlink" port, wouldn't it strip or replace the VLAN tag with its own trunk tag (or whatever the name is for the backbone traffic)? Money is tight, and I don't think we can afford professional-grade Cisco (I have been campaigning for this for years), so I'd love some advice on which solution has the best support on low-end network equipment, and whether there are some specific models that are recommended - for instance low-end HP switches, or even budget brands like TP-Link, D-Link etc. If I have overlooked another way to solve this problem it is due to my lack of knowledge. :)
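    To the first question: DHCP snooping is per-switch and does not depend on a hub-and-spoke topology; it only requires that, on each switch, the port facing the real DHCP server (the uplink toward the router) is marked trusted, while the apartment-facing ports stay untrusted. In Cisco IOS syntax that looks like the sketch below; budget managed switches from TP-Link, D-Link or Netgear expose the same concept under different menus (the interface names are hypothetical):

        ip dhcp snooping
        ip dhcp snooping vlan 1
        !
        interface GigabitEthernet0/1
         description uplink toward the internet router
         ip dhcp snooping trust
        !
        ! All other ports stay untrusted by default, so DHCP OFFER/ACK
        ! packets arriving from apartment ports are dropped.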


  • Lync 2010, Kamailio, & Trixbox 2.6.23 (Asterisk 1.4)

    - by slashp
    I'm having an issue trying to connect Lync 2010 phone calls with our trixbox PBX. I've gotten to the point where Kamailio seems to be functioning properly and acting as a bridge between TCP traffic (from Lync) & UDP traffic (to the trixbox, as Asterisk 1.4 does not support SIP over TCP).

    Our Lync box IP: 10.100.10.41
    Our Kamailio box IP: 10.100.10.44
    Our trixbox IP: 10.100.10.2

    The issue I'm running into is as follows when enabling SIP debugging for the Kamailio box:

        <--- SIP read from 10.100.10.44:5060 --->
        PRACK sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41 SIP/2.0
        FROM: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
        TO: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
        CSEQ: 24 PRACK
        CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        MAX-FORWARDS: 70
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
        VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
        CONTACT: <sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41>
        CONTENT-LENGTH: 0
        USER-AGENT: RTCC/4.0.0.0 MediationServer
        RAck: 1 23 INVITE
        <------------->
        --- (12 headers 0 lines) ---
        Sending to 10.100.10.44 : 5060 (NAT)
        <--- Transmitting (NAT) to 10.100.10.44:5060 --->
        SIP/2.0 481 Call leg/transaction does not exist
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d;received=10.100.10.44
        Via: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989
        From: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430
        To: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e
        Call-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        CSeq: 24 PRACK
        User-Agent: Asterisk PBX
        Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY
        Supported: replaces
        Content-Length: 0
        <------------>
        trixbox1*CLI>
        <--- SIP read from 10.100.10.44:5060 --->
        ACK sip:[email protected];user=phone SIP/2.0
        FROM: "John Jones"<sip:9121;[email protected];user=phone>;tag=4852bab430;epid=CF2380792B
        TO: <sip:[email protected];user=phone>;tag=3684a6a24e;epid=CF2380792B
        CSEQ: 23 ACK
        CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b
        MAX-FORWARDS: 70
        Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d
        VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK79a21c
        CONTENT-LENGTH: 0

    My SIP trunk on the trixbox looks like this:

        [from-lync]
        exten => _+4XXX!,1,Noop(Stripping + from start of number)
        exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1})

    Though I am still having no luck getting the + stripped or the call to go through. Any ideas would be greatly appreciated. Thank you! -slashp
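    One detail that stands out (an observation offered as a sketch, not a confirmed diagnosis): Asterisk's Goto() takes context,extension,priority, and with only two arguments they are read as extension and priority, so the Goto above never actually jumps into from-internal. Also, _+4XXX! only matches numbers beginning with +4. A corrected context might look like:

        [from-lync]
        exten => _+X.,1,NoOp(Stripping + from start of number)
        exten => _+X.,n,Goto(from-internal,${EXTEN:1},1)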


  • Latency issues over internet

    - by Stevo
    I have a Media Temple server running http://www.popsapp.com which I am having latency issues with. If I run ab -n 100 -c 10 http://www.popsapp.com/ from my local machine I get very bad stats, e.g.:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      179  3375 2185.4   2837  12525
        Processing:     0   505  693.3    229   4564
        Waiting:        0    50  115.4      0    415
        Total:        964  3880 2094.5   3159  12608

    Whereas if I run it from a Rackspace server I have, I get this:

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       75    76    3.3     75     84
        Processing:   235   339   81.4    315    579
        Waiting:      159   249   61.7    234    411
        Total:        311   415   82.0    390    663

    To me this looks like intermediate network issues, but I wouldn't have thought it could be this bad! Any ideas how I can improve it? Here's the trace route:

        traceroute to www.popsapp.com (216.70.105.183), 64 hops max, 52 byte packets
         1  192.168.2.1 (192.168.2.1)  3.738 ms  0.953 ms  1.418 ms
         2  host-92-22-112-1.as13285.net (92.22.112.1)  27.409 ms  97.093 ms  78.858 ms
         3  host-78-151-225-141.static.as13285.net (78.151.225.141)  61.830 ms  170.484 ms  113.288 ms
         4  host-78-151-225-80.static.as13285.net (78.151.225.80)  101.513 ms  host-78-151-225-22.static.as13285.net (78.151.225.22)  64.718 ms  47.309 ms
         5  xe-11-1-0-rt001.sov.as13285.net (62.24.240.14)  98.381 ms  114.424 ms  xe-11-1-0-rt001.the.as13285.net (62.24.240.6)  96.592 ms
         6  host-78-144-1-59.as13285.net (78.144.1.59)  36.799 ms  host-78-144-1-63.as13285.net (78.144.1.63)  178.426 ms  host-78-144-1-61.as13285.net (78.144.1.61)  85.516 ms
         7  xe-10-0-0-scr010.thn.as13285.net (78.144.0.224)  88.158 ms  host-78-144-0-207.as13285.net (78.144.0.207)  35.132 ms  host-78-144-0-153.as13285.net (78.144.0.153)  121.464 ms
         8  limelight-pp-thn.as13285.net (78.144.3.6)  46.987 ms  limelight-pp-sov.as13285.net (78.144.5.18)  108.025 ms  40.169 ms
         9  tge11-1.fr4.lga.llnw.net (69.28.172.149)  109.603 ms  ve6.fr4.lon.llnw.net (68.142.88.221)  121.681 ms  38.609 ms
        10  tge11-1.fr4.lga.llnw.net (69.28.172.149)  111.981 ms  113.744 ms  111.711 ms
        11  tge8-2.fr4.iad.llnw.net (69.28.189.34)  117.102 ms  ve5.fr4.iad.llnw.net (69.28.171.214)  184.372 ms  146.178 ms
        12  cr02-1-1.iad1.net2ez.com (65.97.48.254)  182.880 ms  net2ez.tge2-2.fr4.iad.llnw.net (69.28.156.170)  150.489 ms  121.862 ms
        13  65.97.50.26 (65.97.50.26)  184.620 ms  cr02-1-1.iad1.net2ez.com (65.97.48.254)  156.136 ms  131.963 ms
        14  65.97.50.26 (65.97.50.26)  124.899 ms  126.537 ms  123.322 ms
        15  e1.4.as02.iad01.mtsvc.net (70.32.64.246)  134.647 ms  186.307 ms  211.059 ms
        16  popsapp.com (216.70.105.183)  118.876 ms  113.189 ms  vzx258.mediatemple.net (216.70.104.17)  131.012 ms

    Looks to me like there is significant delay across the Limelight network. This would explain why the traceroute via my Rackspace server doesn't suffer from the same delay, as they will be using their own trunk.
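    For per-hop loss and jitter over time, mtr usually gives a clearer picture than a one-shot traceroute (a sketch; run it from both the local machine and the Rackspace box and compare the two paths):

        mtr --report --report-cycles 100 www.popsapp.com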


  • How to generate a build number with the SVN revision number & the Maven buildNumber plugin

    - by Binit jha
    Hi, I am using the Maven buildNumber plugin to generate a build number from the latest SVN revision number. But ${buildNumber} in our version is not resolved when the artifact is installed into the .m2 local repository. Here are the relevant POM details:

        <modelVersion>4.0.0</modelVersion>
        <groupId>com.hp.cloudprint</groupId>
        <artifactId>testutils</artifactId>
        <name>testutils</name>
        <version>6.3.rel.${buildNumber}</version>
        <description>This jar contains some helper classes which can simplify the writing of JUnit test cases.</description>
        <dependencies>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>buildnumber-maven-plugin</artifactId>
          <executions>
            <execution>
              <id>useLastCommittedRevision</id>
              <phase>validate</phase>
              <goals>
                <goal>create</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <doCheck>false</doCheck>
            <doUpdate>true</doUpdate>
            <getRevisionOnlyOnce>true</getRevisionOnlyOnce>
          </configuration>
        </plugin>
        <scm>
          <connection>scm:svn:https://acn-platform</connection>
          <developerConnection>scm:svn:https://abc-platform/trunk</developerConnection>
        </scm>
        </project>

    The build output:

        Building jar: C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar
        [INFO] [install:install]
        [INFO] Installing C:\Documents and Settings\hpadmin\workspace\testutils\target\testutils-6.3.rel.2930.jar to C:\Documents and Settings\jhab\.m2\repository\com\hp\cloudprint\testutils\6.3.rel.${buildNumber}\testutils-6.3.rel.${buildNumber}.jar
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD SUCCESSFUL

    The target folder does contain a correctly named jar: testutil-6.3.rel.2297.jar. Thanks in advance. Binit
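    A common workaround (a sketch, based on the known Maven limitation that ${...} expressions inside <version> are not interpolated when the artifact is installed or deployed): keep a literal version and carry the build number in the final name and/or the manifest instead:

        <version>6.3.rel</version>
        ...
        <build>
          <finalName>${project.artifactId}-${project.version}.${buildNumber}</finalName>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-jar-plugin</artifactId>
              <configuration>
                <archive>
                  <manifestEntries>
                    <Implementation-Build>${buildNumber}</Implementation-Build>
                  </manifestEntries>
                </archive>
              </configuration>
            </plugin>
          </plugins>
        </build>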


  • Nginx fastcgi problems with django (double slashes in url?)

    - by wizard
    I'm deploying my first Django app. I'm familiar with nginx and FastCGI from deploying php-fpm, but I can't get Python to recognize the URLs, and I'm at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging FastCGI problems. Currently I get a 404 page regardless of the URL, and for some reason a double slash. For http://www.site.com/admin/:

        Page not found (404)
        Request Method: GET
        Request URL: http://www.site.com/admin//

    My urls.py from the debug output - these work in the dev server:

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;
            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }
            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }
            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    And my fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    And lastly I'm running FastCGI from the command line with Django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?

    Notes:
        nginx version: nginx/0.7.62
        Django svn trunk rev 13013
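    Two things worth checking (sketches/assumptions, not confirmed fixes). First, the nginx config passes requests to port 8001 while runfcgi listens on port=8080, so the two need to agree. Second, the doubled slash is the classic symptom of sending both SCRIPT_NAME and PATH_INFO with the same value; for Django, keeping PATH_INFO and dropping SCRIPT_NAME from fastcgi_params is the usual adjustment:

        # fastcgi_params for Django: keep PATH_INFO, drop SCRIPT_NAME
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # fastcgi_param SCRIPT_NAME $fastcgi_script_name;   <- remove this line

        # ...and make the ports agree:
        location / {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass 127.0.0.1:8001;   # runfcgi must listen here, not 8080
        }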


  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
    I'm attempting to debug a Subversion post-commit hook that calls some python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it one particular step doesn't work. We're using CollabNet SVNServe, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now. Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name. The relevant portion of post-commit.bat is: echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log set SITENAME=staging set SVNPATH=branches/staging/wwwroot/ "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^ --svnUser="svnusername" ^ --svnPass="svnpassword" ^ --ftp-user=ftpuser ^ --ftp-password=ftppassword ^ --ftp-remote-dir=/ ^ --access-url=svn://10.0.100.6/company ^ --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^ --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log When I run post-commit.bat manually, for example: post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log: -------------------------- args1: c:\svn-repos\company args0: staging.company.com abspath: c:\svn-repos\company project_dir: branches/staging/wwwroot/ local_repos_path: c:\svn-repos\company getting youngest revision... done, up-to-date -------------------------- However, when I commit something to the repo and it runs automatically, the output is: -------------------------- -------------------------- svn2ftp.py is a bit long, so I apologize but here goes. I'll have some notes/disclaimers about its contents below it. #!/usr/bin/env python """Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH Upload to FTP-HOST changes committed to the Subversion repository at REPOS-PATH. Uses svn diff --summarize to only propagate the changed files Options: -?, --help Show this help message. -u, --ftp-user=USER The username for the FTP server. Default: 'anonymous' -p, --ftp-password=P The password for the FTP server. Default: '@' -P, --ftp-port=X Port number for the FTP server. Default: 21 -r, --ftp-remote-dir=DIR The remote directory that is expected to resemble the repository project directory -a, --access-url=URL This is the URL that should be used when trying to SVN export files so that they can be uploaded to the FTP server -s, --status-file=PATH Required. This script needs to store the last successful revision that was transferred to the server. PATH is the location of this file. -d, --project-directory=DIR If the project you are interested in sending to the FTP server is not under the root of the repository (/), set this parameter. Example: -d 'project1/trunk/' This should NOT start with a '/'. 2008.5.2 CKS Fixed possible Windows-related bug with tempfile, where the script didn't have permission to write to the tempfile. Replaced this with a open()-created file created in the CWD. 2008.5.13 CKS Added error logging. Added exception for file-not-found errors when deleting files. 
2008.5.14 CKS Change file open to 'rb' mode, to prevent Python's universal newline support from stripping CR characters, causing later comparisons between FTP and SVN to report changes. """ try: import sys, os import logging logging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', filename='svn2ftp.debug.log', filemode='a' ) console = logging.StreamHandler() console.setLevel(logging.ERROR) logging.getLogger('').addHandler(console) import getopt, tempfile, smtplib, traceback, subprocess from io import StringIO import pysvn import ftplib import inspect except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) print(stack_trace) #end capture sys.exit(1) #defaults host = "" user = "anonymous" password = "@" port = 21 repo_path = "" local_repos_path = "" status_file = "" project_directory = "" remote_base_directory = "" toAddrs = "[email protected]" youngest_revision = "" def email(toAddrs, message, subject, fromAddr='[email protected]'): headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject) message = headers + message logging.info('sending email to %s...' % toAddrs) server = smtplib.SMTP('smtp.company.com') server.set_debuglevel(1) server.sendmail(fromAddr, toAddrs, message) server.quit() logging.info('email sent') def captureErrorMessage(e): sout = StringIO() traceback.print_exc(file=sout) errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80) return errorMessage def usage_and_exit(errmsg): """Print a usage message, plus an ERRMSG (if provided), then exit. If ERRMSG is provided, the usage message is printed to stderr and the script exits with a non-zero error code. Otherwise, the usage message goes to stdout, and the script exits with a zero errorcode.""" if errmsg is None: stream = sys.stdout else: stream = sys.stderr print(__doc__, file=stream) if errmsg: print("\nError: %s" % (errmsg), file=stream) sys.exit(2) sys.exit(0) def read_args(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision try: opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:", ["help", "ftp-user=", "ftp-password=", "ftp-port=", "ftp-remote-dir=", "access-url=", "status-file=", "project-directory=", "svnUser=", "svnPass=" ]) except getopt.GetoptError as msg: usage_and_exit(msg) for opt, arg in opts: if opt in ("-?", "--help"): usage_and_exit() elif opt in ("-u", "--ftp-user"): user = arg elif opt in ("-p", "--ftp-password"): password = arg elif opt in ("-SU", "--svnUser"): svnUser = arg elif opt in ("-SP", "--svnPass"): svnPass = arg elif opt in ("-P", "--ftp-port"): try: port = int(arg) except ValueError as msg: usage_and_exit("Invalid value '%s' for --ftp-port." 
% (arg)) if port < 1 or port > 65535: usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.") elif opt in ("-r", "--ftp-remote-dir"): remote_base_directory = arg elif opt in ("-a", "--access-url"): repo_path = arg elif opt in ("-s", "--status-file"): status_file = os.path.abspath(arg) elif opt in ("-d", "--project-directory"): project_directory = arg if len(args) != 3: print(str(args)) usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")") host = args[0] print("args1: " + args[1]) print("args0: " + args[0]) print("abspath: " + os.path.abspath(args[1])) local_repos_path = os.path.abspath(args[1]) print('project_dir:',project_directory) youngest_revision = int(args[2]) if status_file == "" : usage_and_exit("No status file specified") def main(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision read_args() #repository,fs_ptr #get youngest revision print("local_repos_path: " + local_repos_path) print('getting youngest revision...') #youngest_revision = fs.youngest_rev(fs_ptr) assert youngest_revision, "Unable to lookup youngest revision." last_sent_revision = get_last_revision() if youngest_revision == last_sent_revision: # no need to continue. we should be up to date. print('done, up-to-date') return if last_sent_revision or youngest_revision < 10: # Only compare revisions if the DAT file contains a valid # revision number. Otherwise we risk waiting forever while # we parse and uploading every revision in the repo in the case # where a repository is retroactively configured to sync with ftp. pysvn_client = pysvn.Client() pysvn_client.callback_get_login = get_login rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision) rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision) summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False) print('summary len:',len(summary)) if len(summary) > 0 : print('connecting to %s...' % host) ftp = FTPClient(host, user, password) print('connected to %s' % host) ftp.base_path = remote_base_directory print('set remote base directory to %s' % remote_base_directory) #iterate through all the differences between revisions for change in summary : #determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate. print('change path:',change.path) ftp_relative_path = apply_basedir(change.path) print('ftp rel path:',ftp_relative_path) #only try to sync path if the path is in our project_directory if ftp_relative_path != "" : is_file = (change.node_kind == pysvn.node_kind.file) if str(change.summarize_kind) == "delete" : print("deleting: " + ftp_relative_path) try: ftp.delete_path("/" + ftp_relative_path, is_file) except ftplib.error_perm as e: if 'cannot find the' in str(e) or 'not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" : local_file = "" if is_file : local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path) print("uploading file: " + ftp_relative_path) ftp.upload_path("/" + ftp_relative_path, is_file, local_file) if is_file : os.remove(local_file) elif str(change.summarize_kind) == "normal" : print("skipping 'normal' element: " + ftp_relative_path) else : raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path) ftp.close() #write back the last revision that was synced print("writing last revision: " + str(youngest_revision)) set_last_revision(youngest_revision) # todo: undo def get_login(a,b,c,d): #arguments don't matter, we're always going to return the same thing try: return True, "svnUsername", "svnPassword", True except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) #end capture sys.exit(1) #functions for persisting the last successfully synced revision def get_last_revision(): if os.path.isfile(status_file) : f=open(status_file, 'r') line = f.readline() f.close() try: i = int(line) except ValueError: i = 0 else: i = 0 f = open(status_file, 'w') f.write(str(i)) f.close() return i def set_last_revision(rev) : f = open(status_file, 'w') f.write(str(rev)) f.close() #augmented ftp client class that can work off a base directory class FTPClient(ftplib.FTP) : def __init__(self, host, username, password) : self.base_path = "" self.current_path = "" ftplib.FTP.__init__(self, host, username, password) def cwd(self, path) : debug_path = path if self.current_path == "" : self.current_path = self.pwd() print("pwd: " + self.current_path) if not os.path.isabs(path) : debug_path = self.base_path + "<" + path path = os.path.join(self.current_path, path) elif self.base_path != "" : debug_path = self.base_path + ">" + path.lstrip("/") path = os.path.join(self.base_path, path.lstrip("/")) path = os.path.normpath(path) #by this point the path should be absolute. if path != self.current_path : print("change from " + self.current_path + " to " + debug_path) ftplib.FTP.cwd(self, path) self.current_path = path else : print("staying put : " + self.current_path) def cd_or_create(self, path) : assert os.path.isabs(path), "absolute path expected (" + path + ")" try: self.cwd(path) except ftplib.error_perm as e: for folder in path.split('/'): if folder == "" : self.cwd("/") continue try: self.cwd(folder) except: print("mkd: (" + path + "):" + folder) self.mkd(folder) self.cwd(folder) def upload_path(self, path, is_file, local_path) : if is_file: (path, filename) = os.path.split(path) self.cd_or_create(path) # Use read-binary to avoid universal newline support from stripping CR characters. f = open(local_path, 'rb') self.storbinary("STOR " + filename, f) f.close() else: self.cd_or_create(path) def delete_path(self, path, is_file) : (path, filename) = os.path.split(path) print("trying to delete: " + path + ", " + filename) self.cwd(path) try: if is_file : self.delete(filename) else: self.delete_path_recursive(filename) except ftplib.error_perm as e: if 'The system cannot find the' in str(e) or '550 File not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise def delete_path_recursive(self, path): if path == "/" : raise "WARNING: trying to delete '/'!" for node in self.nlst(path) : if node == path : #it's a file. delete and return self.delete(path) return if node != "." and node != ".." : self.delete_path_recursive(os.path.join(path, node)) try: self.rmd(path) except ftplib.error_perm as msg : sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg)) # apply the project_directory setting def apply_basedir(path) : #remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated if not path.startswith(project_directory) : return "" return path.replace(project_directory, "", 1) def svn_export_temp(pysvn_client, base_path, rev, path) : # Causes access denied error. Couldn't deduce Windows-perm issue. # It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it. # Regardless, just generating a simple filename seems to work. #(fd, dest_path) = tempfile.mkstemp() dest_path = tmpName = '%s.tmp' % __file__ exportPath = os.path.join(base_path, path).replace('\\','/') print('exporting %s to %s' % (exportPath, dest_path)) pysvn_client.export( exportPath, dest_path, force=False, revision=rev, native_eol=None, ignore_externals=False, recurse=True, peg_revision=rev ) return dest_path if __name__ == "__main__": logging.info('svnftp.start') try: main() logging.info('svnftp.done') except Exception as e: # capture the location of the error for debug purposes frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace[:-1]) print(stack_trace) # end capture error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e) subject = "ALERT: SVN2FTP Error" message = """An Error occurred while trying to FTP an SVN commit. repo_path = %(repo_path)s\n local_repos_path = %(local_repos_path)s\n project_directory = %(project_directory)s\n remote_base_directory = %(remote_base_directory)s\n error_text = %(error_text)s """ % globals() email(toAddrs, message, subject) logging.error(e) Notes/Disclaimers: I have basically no python training so I'm learning as I go and spending lots of time reading docs to figure stuff out. The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right? The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnuser and svnpass arguments to the existing argument parsing.) So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is: 2012-09-06 15:18:12,496 INFO svnftp.start 2012-09-06 15:18:12,496 INFO svnftp.done And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one, and don't know where to go from here. Any ideas?
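    One low-level thing worth trying (an assumption, not a verified diagnosis): post-commit.bat redirects stdout into svn2ftp.out.log but not stderr, so if the script dies before reaching main() under svnserve's stripped-down environment (an import failure, for example), the traceback goes to stderr and silently vanishes. Capturing stderr too would confirm or rule that out. Separately, note that the expression "(" + len(args) + ")" in read_args will itself raise a TypeError if it is ever reached, masking the real usage message; wrapping len(args) in str() avoids that.

        "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
            ... >> c:\svn-repos\company\hooks\svn2ftp.out.log 2>&1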


  • CodePlex Daily Summary for Sunday, March 07, 2010

    New Projects
    Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
    ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: a Numeric Text Box (an extended NumericUpDown) and a Splash Screen base fo...
    Automaton Home: Automaton is a home automation software built with an n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
    Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
    Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
    indiologic: Utilities of an Indio
    Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
    Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
    Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. *By we, we mean I; and by efficient, I mean hopefully so.
    project euler solutions from mhinze: mhinze project euler solutions
    Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layers
    sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
    SuperSocket: SuperSocket, a socket application framework, can build FTP/SMTP/POP servers easily
    Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

    New Releases
    ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
    Blacklist of Providers: 1.0-Milestone 1: In this development release implemented: Main interface (Work Item #5453), Database (Work Item #5523)
    C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
    Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1, built against .NET 4.0. The WCF tests currently fail unless...
    Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
    Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
    ESB Toolkit Extensions: Tellago SOA ESB Extensions v0.3: Windows Installer file that installs the library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
    GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
    Home Access Plus+: v3.0.3.0: Version 3.0.3.0 release change log: added Announcement Box; removed script files that aren't needed; fixed & issue in directory path; stylesheet...
    Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
    mavjuz WndLpt: wndlpt-0.2.5: New: response to 5 LPT inputs "test i 1"; new: reaction to 12 LPT outputs "test q 8"; new: reaction to all LPT pins "test pin 15"; new: syntax: ...
    Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND, and a possibility to recreate a neural network from...
    Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double-clicking on a generic item.
    RoTwee: RoTwee 6.2.0.0: New feature: 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with a hashtag.
    Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want at the click of a button. All you have to do is click the button and browse via th...
    WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: What's new: File Browser: Folders & Files view reworked; File Browser: folders are displayed as TreeVi...
    WSDLGenerator: WSDLGenerator 0.0.0.4: replaced CommonLibrary.dll by CommandLineParser.dll; added better support for custom complex types

    Most Popular Projects
    MetaSharp
    Silverlight Toolkit
    ASP.NET Ajax Library
    All-In-One Code Framework
    Windows 7 USB/DVD Download Tool
    ニコ生アラート
    Windows Double Explorer
    Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2
    Caliburn: An Application Framework for WPF and Silverlight
    ArkSwitch

    Most Active Projects
    Umbraco CMS
    Rawr
    SDS: Scientific DataSet library and tools
    BlogEngine.NET
    jQuery Library for SharePoint Web Services
    patterns & practices – Enterprise Library
    Ionics Isapi Rewrite Filter
    Farseer Physics Engine
    Fasterflect - A Fast and Simple Reflection API
    Fluent Assertions


  • GlassFish Clustering with DCOM on Windows

    - by ByronNevins
    DCOM - Distributed COM, a Microsoft protocol for communicating with Windows machines.

    Why use DCOM?
    In GlassFish 3.1, SSH is used as the standard way to run commands on remote nodes for clustering. It is very difficult for users to get SSH configured properly on Windows. SSH does not come with Windows, so we have to depend on third-party tools, and the user is forced to install and configure these tools - which can be tricky. DCOM is available on all supported platforms; it is built in to Windows. The idea is to use DCOM to communicate with remote Windows nodes. This has the huge advantage that the user has to do minimal, if any, configuration on the Windows nodes.

    Implementation Highlights
    Two open source libraries have been added to GlassFish:
    - Jcifs - a SAMBA implementation in Java
    - J-Interop - a Java implementation for making DCOM calls to remote Windows computers
    Note that any supported platform can use DCOM to work with Windows nodes - not just Windows. E.g. you can have a Linux DAS work with Windows remote instances. All existing SSH commands now have a corresponding DCOM command - except for setup-ssh, which isn't needed for DCOM. validate-dcom is an all-new command.

    New DCOM Commands
    - create-node-dcom
    - delete-node-dcom
    - install-node-dcom
    - list-nodes-dcom
    - ping-node-dcom
    - uninstall-node-dcom
    - update-node-dcom
    - validate-dcom
    - setup-local-dcom (only available via Update Center for GlassFish 3.1.2)
    These commands are in place in the trunk (4.0) and in the branch (3.1.2).

    Windows Configuration Challenges
    There are an infinite number of possible configurations of Windows if you look at it as a combination of main release, service pack, special drivers, software, configuration, etc. Later versions of Windows err on the side of tightening security by default. This means that the Windows host may need to have configuration changes made. These configuration changes mostly need to be made by the user. setup-local-dcom will assist you in making the required changes to the Windows Registry. See the reference blogs for details.

    The validate-dcom Command
    validate-dcom is a crucial command. It should be run before any other commands; if it does not run successfully, then there is no point in running other commands. The validate-dcom command must be used from a DAS machine to test a different Windows machine. If validate-dcom runs successfully, you can be confident that all the DCOM commands will work. Conversely, the opposite is also true: if validate-dcom fails, then no DCOM commands will work.

    What validate-dcom does
    - Verifies that the remote host is not the local machine
    - Resolves the remote host name
    - Checks that the remote DCOM port is being listened on (135, 139)
    - Checks that the remote host's File Sharing is enabled (port 445)
    - Copies a file (a script) to the remote host to verify that SAMBA is working and authorization is correct
    - Runs the script that it copied on the fly to the remote host

    Tips and Tricks
    The bread-and-butter commands that use DCOM are existing commands like create-instance, start-instance, etc. All of the commands that have dcom in their name are for dealing with the actual nodes. The way the software works is to call asadmin.bat on the remote machine and run a command. This means that you can track these commands easily on the remote machine with the usual tools, e.g. using AS_LOGFILE, looking at log files, etc. It's easy to attach a debugger to the remote asadmin process, "just in time", if necessary.

    How to debug the remote commands:
    Edit the asadmin.bat file that is in the glassfish/bin folder (use glassfish/lib/nadmin.bat in GlassFish 4.0+) and add these options to the java call:

        -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1234

    Now if you run, say, start-instance on the DAS, you can attach your debugger, at your leisure, to the remote machine's port 1234. It will be running start-local-instance and patiently waiting for you to attach.
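    For example, attaching the JDK's command-line debugger to that suspended remote process might look like this (a sketch; the host name is hypothetical):

        jdb -connect com.sun.jdi.SocketAttach:hostname=windows-node1,port=1234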

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

Whenever I set up a new Team Project I implement the basic version control structure. I put "readme.txt" files in the folder structure explaining the different levels, and a solution file called "[Client].[Product].sln" located at "$/[Client]/[Product]/DEV/Main" within version control. Developers should add any projects they need to create to that solution in the format "[Client].[Product].[ProductArea].[Assembly]", and these will automatically be picked up and built when you set up Automated Builds using Team Foundation Build. All test projects need to use MSTest to get proper IDE and Team Foundation Build integration out of the box, and should be named for the assembly they are testing, with a naming convention of "[Client].[Product].[ProductArea].[Assembly].Tests".

Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.

Figure: The Team Project level - at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor".

Figure: Product level - at this level there should be only 3 folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release. Feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integration) back down into Main and being decommissioned.

Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

Figure: When we are ready to do a release of our software we will create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

Figure: All bugs found on a release are fixed on the release.
All bugs found in a release are fixed on the release and a new deployment is created. After the deployment is created the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate our development from our production-ready code.

Figure: SAFE or RTM is a read-only record of what you actually released.
Labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and we are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

Figure: This allows us to reference any particular version of our application that was ever shipped.

In addition, I am an advocate of having a single solution with all the project folders directly under the "Trunk"/"Main" folder, using the full name for the project folders.

Figure: The ideal solution

If you must have multiple solutions, because you need to use more than one version of Visual Studio, name the solutions "[Client].[Product][VSVersion].sln" and have them reside in the same folder as the other solution. This makes things easier for automated builds and improves the discoverability of your code and its dependencies. (A sketch of the resulting tree follows at the end of this post.) Send me your feedback!

Technorati Tags: VS ALM,VSTS Developing,VS 2010,VS 2008,TFS 2010,TFS 2008,TFBS
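To pull the guidance above together, the resulting version control tree might look like this; "Contoso" and "CodeAuditor" are placeholder client and product names:

    $/Contoso
        readme.txt
        CodeAuditor
            readme.txt
            DEV
                Main
                    Contoso.CodeAuditor.sln
                    Contoso.CodeAuditor.Core
                    Contoso.CodeAuditor.Core.Tests
                    Contoso.CodeAuditor.Web
                    Contoso.CodeAuditor.Web.Tests
            RELEASE
                Release1
            SAFE
                Release1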

    Read the article

  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes – HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4.

Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from. After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving on HTML5 support, Cloud, and investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8.

As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap:

GlassFish Server Open Source Edition 4.1 is scheduled for 2014.
We are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported.
As we head towards Java EE 8:
The trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation.
The Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5. This replicates what has been done in past Java EE and GlassFish Server releases.
Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support – specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released. Commercial Java EE 7 support will be provided from WebLogic Server. Oracle GlassFish Server will not be releasing a 4.x commercial version.

Expanding on that last bullet, new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward:

Applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server.
GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here).
GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are quite a few configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT.
Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, and Oracle Enterprise Manager, and are entitled to support for the underlying Oracle JDK.

To summarize, Oracle is committed to the future of Java EE. Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle.

Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers, and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete, the software is deployed to the relevant customer.

The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed, and deploy to the customer! (Coming from a pharma and medical-devices software background, I find this unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem, as the company is starting to grow all of a sudden, with some big clients coming on board and more smaller ones.

I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously, most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches, then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches.

With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement the "continuous integration" approach, as I see it, breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev. I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released.
Other feature branches can grab changes from other branches when they need or want them, so eventually all changes get absorbed into everyone else's. This fits very much with the pure Bazaar model I experienced at my last job. As flexible as this sounds, it just seems odd not to have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes that never get pulled across to other branches, with developers complaining about merge disasters. A rough sketch of the flow I have in mind follows below.

What are people's thoughts on this?

A second, final question: I am somewhat confused about the exact definition of distributed source control. Some people seem to suggest it is just about not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!
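For what it's worth, the flow I have in mind would look roughly like this in Bazaar; the server path and branch names are invented for illustration:

    # Each feature lives in its own branch; there is no dev/test/prod trunk.
    bzr branch bzr+ssh://server/repo/feature-login feature-login
    cd feature-login
    # ...develop; pull in changes another feature team has published...
    bzr merge bzr+ssh://server/repo/feature-reports
    bzr commit -m "Absorb reporting changes needed by the login work"
    # When the feature is complete: lock the branch, test, fix, re-test, release.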

    Read the article

  • Atheros AR2413 wireless not working after shutdown

    - by Chandrasekhar
    I am using Ubuntu 11.04 on an Acer Aspire 3680 laptop and my wifi is not working. I followed the commands below to install the madwifi driver:

sudo su
apt-get install subversion
cd /usr/src
svn checkout http://madwifi-project.org/svn/madwifi/trunk madwifi
tar cfvz madwifi.tgz madwifi
cd madwifi
make && make install
echo "blacklist ath5k" >> /etc/modprobe.d/blacklist.conf
echo "ath_pci" >> /etc/modules
modprobe ath_pci
sudo reboot

After installation I am facing the same problem: my wifi won't work after I shut down. In fact it didn't work after suspend either, but I rectified that problem with the following commands:

Command 1:
sudo rmmod -f ath_pci
sudo rfkill unblock all
sudo modprobe ath_pci

along with the line SUSPEND_MODULES=ath_pci added to the /etc/pm/config.d/madwifi file. So if I suspend and then turn on my laptop, the wifi loads well and doesn't create a problem. But if I shut down my laptop the wifi never loads again, and each time I have to run a Ubuntu 9.04 live CD to load it. I did try adding Command 1 to /etc/rc.local, but it still doesn't work.

So my question is: what should I do in order to make my wireless work without having to run a live CD of Ubuntu 9.04 every time after shutdown? Thanks.

Here are the outputs which one might need:

Output 1
chandru@chandru-acer:~$ lspci
00:00.0 Host bridge: Intel Corporation Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub (rev 03)
00:02.0 VGA compatible controller: Intel Corporation Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller (rev 03)
00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
00:1b.0 Audio device: Intel Corporation N10/ICH 7 Family High Definition Audio Controller (rev 02)
00:1c.0 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 1 (rev 02)
00:1c.1 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 2 (rev 02)
00:1c.2 PCI bridge: Intel Corporation N10/ICH 7 Family PCI Express Port 3 (rev 02)
00:1d.0 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 (rev 02)
00:1d.1 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 (rev 02)
00:1d.2 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 (rev 02)
00:1d.3 USB Controller: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 (rev 02)
00:1d.7 USB Controller: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller (rev 02)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev e2)
00:1f.0 ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
00:1f.2 IDE interface: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller (rev 02)
00:1f.3 SMBus: Intel Corporation N10/ICH 7 Family SMBus Controller (rev 02)
02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8038 PCI-E Fast Ethernet Controller (rev 14)
0a:03.0 Ethernet controller: Atheros Communications Inc. AR2413 802.11bg NIC (rev 01)
0a:09.0 CardBus bridge: Texas Instruments PCIxx12 Cardbus Controller
0a:09.2 Mass storage controller: Texas Instruments 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD)

Output 2: lsmod
Module Size Used by
wlan_tkip 17074 2
binfmt_misc 13213 1
parport_pc 32111 0
ppdev 12849 0
snd_hda_codec_si3054 12924 1
snd_hda_codec_realtek 255882 1
joydev 17322 0
snd_atiixp_modem 18624 0
snd_via82xx_modem 18305 0
snd_intel8x0m 18493 0
snd_ac97_codec 105614 3 snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m
snd_hda_intel 24113 2
ac97_bus 12642 1 snd_ac97_codec
snd_hda_codec 90901 3 snd_hda_codec_si3054,snd_hda_codec_realtek,snd_hda_intel
i915 451053 3
snd_hwdep 13274 1 snd_hda_codec
snd_pcm 80042 7 snd_hda_codec_si3054,snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m,snd_ac97_codec,snd_hda_intel,snd_hda_codec
snd_seq_midi 13132 0
snd_rawmidi 25269 1 snd_seq_midi
drm_kms_helper 40971 1 i915
snd_seq_midi_event 14475 1 snd_seq_midi
snd_seq 51291 2 snd_seq_midi,snd_seq_midi_event
pcmcia 39671 0
snd_timer 28659 2 snd_pcm,snd_seq
snd_seq_device 14110 3 snd_seq_midi,snd_rawmidi,snd_seq
drm 184164 4 i915,drm_kms_helper
yenta_socket 27230 0
tifm_7xx1 12898 0
wlan_scan_sta 21945 1
ath_rate_sample 17279 1
pcmcia_rsrc 18292 1 yenta_socket
psmouse 73312 0
tifm_core 15040 1 tifm_7xx1
snd 55295 18 snd_hda_codec_si3054,snd_hda_codec_realtek,snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m,snd_ac97_codec,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
serio_raw 12990 0
i2c_algo_bit 13184 1 i915
soundcore 12600 1 snd
pcmcia_core 21505 3 pcmcia,yenta_socket,pcmcia_rsrc
video 19112 1 i915
ath_pci 183044 0
snd_page_alloc 14073 5 snd_atiixp_modem,snd_via82xx_modem,snd_intel8x0m,snd_hda_intel,snd_pcm
wlan 224640 5 wlan_tkip,wlan_scan_sta,ath_rate_sample,ath_pci
ath_hal 398701 3 ath_rate_sample,ath_pci
lp 13349 0
parport 36746 3 parport_pc,ppdev,lp
usbhid 41704 0
hid 77084 1 usbhid
sky2 49172 0

Output 3
root@chandru-acer:~# lshw -C network
PCI (sysfs)
*-network
description: Ethernet interface
product: 88E8038 PCI-E Fast Ethernet Controller
vendor: Marvell Technology Group Ltd.
physical id: 0
bus info: pci@0000:02:00.0
logical name: eth0
version: 14
serial: 00:16:36:fb:aa:64
capacity: 100Mbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.28 firmware=N/A latency=0 link=no multicast=yes port=twisted pair
resources: irq:43 memory:44000000-44003fff ioport:2000(size=256)
*-network
description: Wireless interface
product: AR2413 802.11bg NIC
vendor: Atheros Communications Inc.
physical id: 3
bus info: pci@0000:0a:03.0
logical name: wifi0
version: 01
serial: 00:19:7d:d3:0c:fd
width: 32 bits
clock: 33MHz
capabilities: pm bus_master cap_list logical ethernet physical wireless
configuration: broadcast=yes driver=ath_pci ip=192.168.1.6 latency=96 maxlatency=28 mingnt=10 multicast=yes wireless=IEEE 802.11g
resources: irq:18 memory:d0000000-d000ffff

Output 4
root@chandru-acer:~# lsmod | grep ath_pci
ath_pci 183044 0
wlan 224640 5 wlan_tkip,wlan_scan_sta,ath_rate_sample,ath_pci
ath_hal 398701 3 ath_rate_sample,ath_pci
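One untested idea I am considering, since the same rmmod/modprobe sequence works interactively but apparently not from /etc/rc.local, is that rc.local may simply fire before the wireless stack is ready; a delayed reload along these lines (the 15-second delay is an arbitrary guess) might be worth trying:

    # Speculative /etc/rc.local workaround - not a confirmed fix.
    (
        sleep 15
        rmmod -f ath_pci
        rfkill unblock all
        modprobe ath_pci
    ) &
    exit 0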

    Read the article

  • Profiling NetBeans 7.0 Beta 2 and Reporting Problems

    - by christopher.jones
    With NetBeans 7.0 recently going into the Beta 2 phase, now is the time to test it out properly and report issues. The development team has been squashing bugs, including memory issues with the PHP bundle. There are some great new PHP-related features in NetBeans 7.0, so you know you want to try it out.

If you identify something wrong with NetBeans, please report it following the guidelines at http://wiki.netbeans.org/IssueReportingGuidelines. Depending on the issue, data to attach to the report is mentioned on http://wiki.netbeans.org/FaqLogMessagesFile and http://wiki.netbeans.org/FaqProfileMeNow.

If you have a memory issue then a memory dump would also be useful. Run the jmap tool for this. There is some background information on http://wiki.netbeans.org/FaqMemoryDump. Here's how I used it.

First I set my environment to match the JDK used by NetBeans. In my case I am using a nightly build, so the JDK is named in the configuration file under $HOME/netbeans-dev-201102210501:

$ egrep netbeans_jdkhome $HOME/netbeans-dev-201102210501/etc/netbeans.conf
netbeans_jdkhome="/home/cjones/src/jdk1.6.0_24"
$ export JAVA_HOME=/home/cjones/src/jdk1.6.0_24
$ export PATH=$JAVA_HOME/bin:$PATH

Next, I found the correct process number to examine:

$ ps -ef | egrep 'netbeans|jdk'
cjones   23230     1  0 16:07 ?        00:00:00 /bin/bash /home/cjones/netbeans-
cjones   23438 23230  2 16:07 ?        00:00:09 /home/cjones/src/jdk1.6.0_24/bin

Finally I used the parent JDK process as the jmap argument:

$ jmap -histo:live 23438
 num     #instances         #bytes  class name
----------------------------------------------
   1:         12075        9028656  [I
   2:         49535        6581920  <constMethodKlass>
   3:         49535        3964128  <methodKlass>
   4:         80256        3840776  <symbolKlass>
   5:         36093        3635336  [C
   6:          5095        3341312  <constantPoolKlass>
   7:          5095        2486016  <instanceKlassKlass>
   8:          4325        1961432  <constantPoolCacheKlass>
   9:         18729        1763976  [B
  10:         59952        1438848  java.util.HashMap$Entry
  . . .

This histogram memory report will help identify the kind of memory issues you are seeing. It may not be as complete as a full heap dump from jmap -dump:live,file=/tmp/nbheap.log 23438, which can easily run to tens of megabytes, but it is much more easily attached to a bug report.

If you want to keep up to date with NetBeans, nightly builds are at http://bits.netbeans.org/download/trunk/nightly/latest/zip/

    Read the article

  • SQL Server - Rebuilding Indexes

    - by Renso
    Goal: Rebuild indexes in SQL Server. This can be done one at a time, or with the example script below to rebuild all indexes for a specified table or for all tables in a given database.

Why?
The data in indexes gets fragmented over time. That means that as the index grows, newly added rows are physically stored in other sections of the allocated database storage space. Kind of like when you load your Christmas shopping into the trunk of your car and it is full, so you continue to load some onto the back seat: in the same way, some storage buffer is created for your index, but once that runs out the data is stored elsewhere, and the data in your index is no longer held in contiguous physical pages. To access the index, the database manager has to "string together" the disparate fragments to present the full index as one contiguous set of pages. Defragmentation fixes that.

What does the fragmentation affect?
Depending, of course, on how large the table is and how fragmented the data is, fragmentation can cause SQL Server to perform unnecessary data reads, slowing down SQL Server's performance.

Which index to rebuild?
As a rule, consider that when you reorganize a table's clustered index, all other non-clustered indexes on that same table are automatically rebuilt. A table can only have one clustered index.

How to rebuild all the indexes for one table:
The DBCC DBREINDEX command works on one table at a time; it will not automatically rebuild the indexes of every table in a database.

How to rebuild all indexes for all tables in a given database:

USE [myDB]    -- enter your database name here
DECLARE @tableName varchar(255)
DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @tableName
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX(@tableName, ' ', 90)     -- a fill factor of 90%
FETCH NEXT FROM TableCursor INTO @tableName
END
CLOSE TableCursor
DEALLOCATE TableCursor

What does this script do?
It reindexes all indexes in all tables of the given database, rebuilding each index with a fill factor of 90%. While DBCC DBREINDEX runs, the table is temporarily unavailable to your users until the rebuild has completed, so don't do this during production hours: it creates a shared lock on the tables, although it still allows read-only uncommitted data reads, i.e. SELECT.

What is the fill factor?
It is the percentage of space on each index page used for storing data when the index is created or rebuilt. It replaces the fill factor specified when the index was created, becoming the new default for the index and for any other nonclustered indexes rebuilt because a clustered index is rebuilt. When fillfactor is 0, DBCC DBREINDEX uses the fill factor value last specified for the index. This value is stored in the sys.indexes catalog view. If fillfactor is specified, table_name and index_name must be specified. If fillfactor is not specified, the default fill factor, 100, is used.

How do I determine the level of fragmentation?
Run the DBCC SHOWCONTIG command. However, this requires you to specify the IDs of both the table and the index being examined. To make that easier, you can run this script, which only needs the table and index names:

DECLARE
@ID int,
@IndexID int,
@IndexName varchar(128)
--Specify the table and index names
SELECT @IndexName = 'index_name'    -- name of the index
SET @ID = OBJECT_ID('table_name')   -- name of the table
SELECT @IndexID = IndID
FROM sysindexes
WHERE id = @ID AND name = @IndexName
--Show the level of fragmentation
DBCC SHOWCONTIG (@ID, @IndexID)

Here is an example:

DBCC SHOWCONTIG scanning 'Tickets' table...
Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
TABLE level scan performed.
- Pages Scanned................................: 915
- Extents Scanned..............................: 119
- Extent Switches..............................: 281
- Avg. Pages per Extent........................: 7.7
- Scan Density [Best Count:Actual Count].......: 40.78% [115:282]
- Logical Scan Fragmentation ..................: 16.28%
- Extent Scan Fragmentation ...................: 99.16%
- Avg. Bytes Free per Page.....................: 2457.0
- Avg. Page Density (full).....................: 69.64%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

What's important here?
The Scan Density. Ideally it should be 100%; as time goes by it drops as fragmentation occurs. When the level drops below 75%, you should consider re-indexing.

Here are the results for the same table and clustered index after running the script:

DBCC SHOWCONTIG scanning 'Tickets' table...
Table: 'Tickets' (1829581556); index ID: 1, database ID: 13
TABLE level scan performed.
- Pages Scanned................................: 692
- Extents Scanned..............................: 87
- Extent Switches..............................: 86
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [87:87]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 22.99%
- Avg. Bytes Free per Page.....................: 639.8
- Avg. Page Density (full).....................: 92.10%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

What's different?
The Scan Density has increased from 40.78% to 100%: no fragmentation on the clustered index. Note that since we rebuilt the clustered index, all other indexes were also rebuilt.
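As an aside, DBCC DBREINDEX and DBCC SHOWCONTIG are deprecated from SQL Server 2005 onward; a rough equivalent using the newer syntax, reusing the Tickets table and myDB database from the examples above, would be:

-- Rebuild every index on the table with a 90% fill factor
ALTER INDEX ALL ON dbo.Tickets REBUILD WITH (FILLFACTOR = 90);

-- Check fragmentation via the dynamic management function
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(N'myDB'), OBJECT_ID(N'dbo.Tickets'), NULL, NULL, 'LIMITED');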

    Read the article

  • JDeveloper does not recognize existing subversion working directory

    - by Bob Webster
    Just a quick note about an issue where JDeveloper no longer recognized an existing Subversion working directory.

Symptom: The JDeveloper Versioning menu offers to version an Application that is already versioned in svn.

Cause: The repository URL contained in the hidden .svn folders of the working directory is no longer valid.

Solution: Determine the correct URL for the Subversion repository and update the working directory by fixing the URL with the svn switch command.

Example:
In a shell, change directory to the Application folder and run the svn info command to confirm the current settings.

$ svn info
Path: .
URL: http://192.168.1.128/repos/jdeveloperrepo/AsyncExamples/BPELCallAsync/trunk
Repository Root: http://192.168.1.128/repos/jdeveloperrepo
Repository UUID: 3dc5eb88-3001-0010-8d6e-fd6f73825647
Revision: 145
Node Kind: directory
Schedule: normal
Last Changed Rev: 145
Last Changed Date: 2012-06-07 07:15:56 -0700 (Thu, 07 Jun 2012)

In this case, the IP address in the repository URL is incorrect; the svn server is located at 192.168.56.1. Note: the IP address currently set is displayed after the Project Name in the Application Navigator (see the screen snapshot above).

Run the svn switch command with the --relocate option, providing as much of the URLs as necessary to correctly rewrite the URL from current to new. For example, to change the repository server address from 192.168.1.128 to 192.168.56.1:

$ svn switch --relocate http://192.168.1.128 http://192.168.56.1 .

(Note the trailing period in the above command.)

When the URL is correct, JDeveloper should recognize the Subversion working directory.

    Read the article

  • How to use JAXWS/JAXB to rename a parameter

    - by shrimpy
    I used CXF (2.2.3) to compile the Amazon Web Service WSDL (http://s3.amazonaws.com/ec2-downloads/2009-07-15.ec2.wsdl) but got the error below:

Parameter: snapshotSet already exists for method describeSnapshots but of type com.amazonaws.ec2.doc._2009_07_15.DescribeSnapshotsSetType instead of com.amazonaws.ec2.doc._2009_07_15.DescribeSnapshotsSetResponseType. Use a JAXWS/JAXB binding customization to rename the parameter.

The conflict is due to the data types shown below:

<xs:complexType name="DescribeSnapshotsType">
  <xs:sequence>
    <xs:element name="snapshotSet" type="tns:DescribeSnapshotsSetType"/>
  </xs:sequence>
</xs:complexType>
<xs:complexType name="DescribeSnapshotsResponseType">
  <xs:sequence>
    <xs:element name="requestId" type="xs:string"/>
    <xs:element name="snapshotSet" type="tns:DescribeSnapshotsSetResponseType"/>
  </xs:sequence>
</xs:complexType>

I created a binding file to try to address the issue, but it didn't do the job:

<jaxws:bindings xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    wsdlLocation="EC2_2009-07-15.wsdl"
    xmlns="http://java.sun.com/xml/ns/jaxws"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:jaxws="http://java.sun.com/xml/ns/jaxws">
  <enableWrapperStyle>false</enableWrapperStyle>
  <jaxws:bindings node="wsdl:definitions/wsdl:types/xs:schema[@targetNamespace='http://ec2.amazonaws.com/doc/2009-07-15/']">
    <jxb:bindings node="xs:complexType[@name='tns:DescribeSnapshotsType']//xs:element[@name='snapshotSet']">
      <jxb:property name="snapshotRequestSet"/>
    </jxb:bindings>
    <jxb:bindings node="xs:complexType[@name='DescribeSnapshotsResponseType']//xs:element[@name='snapshotSet']">
      <jxb:property name="snapshotResponseSet"/>
    </jxb:bindings>
  </jaxws:bindings>
</jaxws:bindings>

And the wsdl2java configuration I used was as below:

<wsdlOptions>
  <wsdlOption>
    <wsdl>${basedir}/src/main/resources/wsdl/EC2_2009-07-15.wsdl</wsdl>
    <extraargs>
      <extraarg>-b</extraarg>
      <extraarg>${basedir}/src/main/resources/wsdl/Bindings_EC2_2009-07-15.xml</extraarg>
    </extraargs>
  </wsdlOption>
</wsdlOptions>

What is wrong with my code? You can check out my project using svn:

svn co http://shrimpysprojects.googlecode.com/svn/trunk/smartcrc/AWSAgent/
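One thing I am now suspicious of: the first jxb:bindings XPath matches on @name='tns:DescribeSnapshotsType', but name attributes in the schema carry no namespace prefix. Perhaps the selector should be (untested):

<jxb:bindings node="xs:complexType[@name='DescribeSnapshotsType']//xs:element[@name='snapshotSet']">
  <jxb:property name="snapshotRequestSet"/>
</jxb:bindings>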

    Read the article

  • Installing Monodevelop from the SVN on Ubuntu 10.04

    - by celil
    I wrote the following script to install the svn version of MonoDevelop:

#!/usr/bin/env bash
PREFIX=/opt/local

check_errs() {
  if [[ $? -ne 0 ]]; then
    echo "${1}"
    exit 1
  fi
}

download() {
  if [ ! -d ${1} ]
  then
    svn co http://anonsvn.mono-project.com/source/trunk/${1}
  else
    (cd ${1}; svn update)
  fi
}

download mono
download mcs
download libgdiplus

(
  cd mono
  ./autogen.sh --prefix=$PREFIX
  make
  make install
  check_errs
)

(
  cd libgdiplus
  ./autogen.sh --prefix=$PREFIX
  make
  make install
  check_errs
)

download monodevelop
export PKG_CONFIG_PATH=${PREFIX}/lib/pkgconfig

(
  cd monodevelop
  ./configure --prefix=$PREFIX --select
  check_errs
  make
  check_errs
)

Everything works fine until the last make step for the monodevelop package, where the script exits with the error:

./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(320,82): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?)
./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(325,49): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?)
./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(345,115): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?)
./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(365,82): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `BeginMethod' and no extension method `BeginMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?)
Compilation failed: 4 error(s), 1 warnings
make[4]: *** [../../../build/AddIns/MonoDevelop.WebReferences/MonoDevelop.WebReferences.dll] Error 1
make[4]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src/addins/MonoDevelop.WebReferences'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src/addins'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main'
make: *** [all-recursive] Error 1

Any ideas on how to fix this? I suppose the build gets mixed up with the default installation of Mono in Ubuntu, and is looking for a symbol that is not present there. My build configuration looks as follows:

1. [X] main
2. [ ] extras/JavaBinding
3. [ ] extras/BooBinding
4. [X] extras/ValaBinding
5. [ ] extras/AspNetEdit
6. [ ] extras/GeckoWebBrowser
7. [ ] extras/WebKitWebBrowser
8. [ ] extras/MonoDevelop.Database
9. [ ] extras/MonoDevelop.Profiling
10. [ ] extras/MonoDevelop.AddinAuthoring
11. [ ] extras/MonoDevelop.CodeAnalysis
12. [ ] extras/MonoDevelop.Debugger.Mdb
13. [ ] extras/MonoDevelop.Debugger.Gdb
14. [ ] extras/PyBinding
15. [ ] extras/MonoDevelop.IPhone
16. [ ] extras/MonoDevelop.MeeGo

    Read the article

  • Android stream to Wowza

    - by Curtis Kiu
    I feel very confused about streaming from Android to Wowza. I am building a cross-platform video conference using RTMP, but Android doesn't speak RTMP, so I need to find another way to do it.

Upstreaming
I found an open-source app called spydroid-ipcamera. It uses RTP, sending UDP packets to the computer, which opens them in VLC using the following SDP:

v=0
s=Unnamed
m=video 5006 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=;

But it can't work. Then I followed the Wowza tutorial, streamed to Wowza, and played the stream again in VLC. That works! I wrote it up in http://code.google.com/p/spydroid-ipcamera/issues/detail?id=2

However, when I want to add audio to the packets, it fails to work. I changed the code in http://code.google.com/p/spydroid-ipcamera/source/browse/trunk/src/net/mkp/spydroid/CameraStreamer.java as follows:

mr.setAudioSource(MediaRecorder.AudioSource.MIC);
mr.setVideoSource(MediaRecorder.VideoSource.CAMERA);
mr.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mr.setVideoFrameRate(20);
mr.setVideoSize(640, 480);
mr.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mr.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mr.setPreviewDisplay(holder.getSurface());

Then I thought that the problem should be in the SDP, but I don't know how to deal with SDP. I am streaming H.264/AAC in MP4.

Second, I don't understand SDP. So how can I build the upstreaming part of a video conference using this app?

Android ----(UDP port 5006)----> PC (SDP file), and then Wowza reads the SDP file ------> VLC

I think in this way the system cannot handle more than one client, since the SDP can only hold one port. Any ideas, or will it actually not work? Also, Wowza needs the stream to be set up before we stream to it, so does that mean I should not follow this approach?

Sorry my English is poor; I hope you guys understand.
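My current, untested guess at an SDP with an AAC track added is below; the audio port, payload type, and config value are placeholders that must match what the phone actually sends (the config shown decodes to AAC-LC, 8 kHz, mono, per the RFC 3640 mpeg4-generic payload format):

v=0
s=Unnamed
m=video 5006 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1;profile-level-id=420016;sprop-parameter-sets=Z0IAFukBQHsg,aM4BDyA=;
m=audio 5004 RTP/AVP 97
a=rtpmap:97 mpeg4-generic/8000
a=fmtp:97 streamtype=5; profile-level-id=15; mode=AAC-hbr; config=1588; sizeLength=13; indexLength=3; indexDeltaLength=3;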

    Read the article

< Previous Page | 25 26 27 28 29 30 31 32  | Next Page >