Search Results


  • What changes were made to a document

    - by Daniel Moth
    Part of my job is writing functional specs. Due to the inevitably iterative and incremental nature of software design and development, these specs need to be updated with additions, deletions, and changes over a period of time. When the time comes for a developer to implement features or update their design document (or for a tester to test the feature or update their test specs), they need to be working against the latest spec. The problem is that if they have already reviewed the document, they need a quick way to find the delta since their last review, to see what has changed and how their existing plans may be affected (instead of having to read the entire document again). Doing that is very easy, assuming your Word documents are hosted on SharePoint.

    1. Every time you review a document, note the SharePoint version and/or date (if it is a printed copy, make sure your printout includes the date in the footer – all my specs do).
    2. When you need to see what changed, open the document (make sure you are not using a cached or local offline copy), go to the "Review" tab on the ribbon, and click the "Compare" button.
    3. Click the "Specific Version…" option. In the dialog that pops up, pick the last version you reviewed and click the "Compare" button. [TIP for authors: before checking in your document, always compare against the "Last Version" on SharePoint so you can add appropriately complete check-in comments.]
    4. Notice that in addition to the document you have open, two other documents just opened up. One is in the background (flashing on your task bar) – close that one, as it is the old version.
    5. The other document is in the foreground and contains all the changes between the old version and the latest one. Be sure not to make edits to this document; use it only for reading the changes. To find all the changes, on the "Review" tab of the ribbon, click "Reviewing Pane" to open the reviewing pane on the left. You can now click on each pink change to see what it is.
    6. When you are done reviewing changes, close the document and don't save any changes (remember, if you want to make edits, additions, or comments, make them in the original document, which is still open).

    And now I have a URL to point people to who keep asking about this – enjoy :-) Comments about this post welcome at the original blog.
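    Word exposes the same compare feature through its COM automation model, so the diff can also be scripted instead of clicked through. Below is a minimal PowerShell sketch, assuming Word is installed locally and that you have already saved the two SharePoint versions to disk – the file paths are hypothetical:

        $word = New-Object -ComObject Word.Application
        $word.Visible = $true
        # Open the last version you reviewed and the latest one from SharePoint
        $old = $word.Documents.Open("C:\specs\FeatureSpec_v3.docx")
        $new = $word.Documents.Open("C:\specs\FeatureSpec_v7.docx")
        # Produces a third document containing the tracked differences,
        # equivalent to the comparison document described in step 5 above
        $word.CompareDocuments($old, $new)

    As with the manual steps, treat the generated comparison document as read-only and close it without saving.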

    Read the article

  • Upgrading Code from 2007 to 2010

    - by MOSSLover
    So I've been doing some upgrades just to see if things will work from 2007 to 2010. So far most of the stuff I want works, but obviously there are some things that break. Did you guys know that in 2007 you could add a webpart to the view pages for lists and libraries without losing the toolbar? In 2010 the ribbon disappears every time you add a webpart. So if you are using Scot Hillier's CodePlex project to hide buttons, it will not work the same way, because the ribbon is going to disappear altogether.

    I have also learned another reason why standalone installations are the bane of my existence. Nine times out of ten the installation is done using Network Service as the application pool account. You are wondering why this is bad? Well, let's just say the site collection administrator with local admin rights wants to attach to the IIS worker process and debug, say, a webpart. Visual Studio 2010 will throw a nasty error that tells you that you are not an administrator. You will say, "But I am an administrator!" You have all the correct group permissions on the server, on SQL, and in SharePoint. Then you will go in and add your own admin account just to see if you can attach the debugger, and you will notice that works properly. So the moral of the story is: create a separate account on your development environment to run all the SharePoint services and such. You don't need to go all out and create the full best-practices set of accounts if it's just your dev environment. I would at least create one single account to run all your SharePoint processes (services, SQL, and app pool) – a rough sketch of setting that up follows below.

    Also, don't run a standalone install unless you want to kill kittens (this is a quote from Todd Klindt). We love kittens; they are cute and awesome. Besides, you learn more if you click Complete and just skip standalone. You will learn how to set up SQL Server 2008 and you will learn how to configure your environment. It will help you in the long run.

    So I have ranted enough for today; I figure these are enough tidbits for you this time around. To the two of you who read my blog – and I know some of you are friends who don't understand SharePoint – I might as well have just done "wahwahwahwah" in Charlie Brown adult-speak. Thanks for reading as usual. I'll catch you all when I complain more about the upgrade process and share more tidbits, which will inevitably become a presentation at a conference or two. Technorati Tags: Upgrade Code SharePoint 2007 to 2010, Visual Studio 2010, SharePoint 2010
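    As a hedged illustration of that tip – the account name, password, and application pool name below are hypothetical, and on a real farm you would assign the identity through Central Administration rather than poking IIS directly – a dev-box setup might look like this from an elevated PowerShell prompt:

        # Create a single dev service account
        net user SPDevService P@ssw0rd! /add
        # Point the SharePoint application pool at it (IIS 7 appcmd)
        & "$env:windir\system32\inetsrv\appcmd.exe" set apppool "SharePoint - 80" `
            /processModel.identityType:SpecificUser `
            /processModel.userName:SPDevService `
            /processModel.password:P@ssw0rd!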

    Read the article

  • My Reference for Amy Lewis

    - by Denise McInerney
    The 2013 election campaign for the PASS Board of Directors is underway. There are seven qualified candidates running this year. They all offer a wealth of experience volunteering for PASS and the SQL Server community. One of these candidates, Amy Lewis, asked me to write a reference for her to include on her candidate application. I have a lot of experience working with Amy and was pleased to provide this reference:

    I enthusiastically support Amy Lewis as a candidate for the PASS Board of Directors. I have known and worked with Amy in various PASS volunteer capacities for years, starting when we were both leaders of SIGs (the precursors to the Virtual Chapters). In that time I have seen Amy grow as a leader, taking on increasing responsibility and developing her leadership skills in the process. From the Program Committee to the BI Virtual Chapter to her local user group's SQL Saturday, Amy has demonstrated a capacity to organize and lead volunteers. A successful leader delivers results, and does so in a way that encourages and empowers the people she is working with; Amy embodies this leadership style.

    As Director for Virtual Chapters I have most recently worked with Amy in her capacity as DW/BI VC Leader. This VC is one of our largest and most active, and Amy's leadership is a key contribution to that success. I was pleased to see that Amy was also thinking about succession and prepared other volunteers to take over the chapter leadership. Amy has shown an understanding of PASS's strategic goals and has focused her volunteer efforts to help us reach those goals. For the past couple of years we have been trying to expand PASS's reach and relevance to SQL communities around the world. The VCs are a key vehicle for this expansion. Amy embraced this idea and organized the VC to engage volunteers in Europe and Australia and provide content that could reach SQL professionals in those regions. A second key strategy for PASS is expanding into the data analytics space. Again Amy rose to the occasion, helping to shape the program for our first Business Analytics Conference and leveraging the BI VC to promote the event.

    By all measures I think Amy is prepared to serve on the Board and contribute in a positive way.

    Read the article

  • Networking Guidelines

    - by ACShorten
    One of the things I have noticed in my years in IT is the changes in networking. In the past networking was pretty simple, with the host name and name resolution (via DNS) being straightforward. Some sites still use this simple networking setup. These days, more complex name resolution, proxies, firewalls, demarcation and virtualization can make networking more complex. This can cause issues when installing products with built-in networking that can frustrate even seasoned veterans. I have put together a few basic guidelines to hopefully help with product installation and getting a product to operate in a somewhat complex network setup.

    - All the components of the product (including the infrastructure) need to communicate via a network (even if it is within a local machine/host). Ensure any host names referred to within configuration files are accessible via your networking setup. This may mean defining the hosts to the machines, to the DNS for name resolution, and even to your firewall to allow machines to communicate within your network.
    - Make sure the ports used for any of the infrastructure are accessible (even through your firewall) and are unique within the host. Duplicated ports can cause the product to fail on startup as the port is already in use.
    - If there are still issues, consider using localhost as your host name. I have used this in so many situations that I tend to use it now as a default any time I install anything myself. Most Oracle products suggest using localhost when using dynamic hosts or dynamic IP addresses, and this is no different for the Oracle Utilities Application Framework. If you do use localhost, then installing a Loopback Adapter for the operating system is recommended to keep networking to a minimum. Usually localhost resolves to 127.0.0.1.
    - When using multiple network connections, especially in a virtualized environment, ensure the host and ports used are relevant for the network cards you have set up. One of the common issues is finding that the product is using a virtualized network card, only to find that it is not set up for correct networking.
    - If you are using the batch component, do not forget to ensure that the multicast protocol is enabled on your host and that the multicast address and port number specified are valid and accessible from all machines in the batch cluster (if clustering is used). The same advice applies if you are using unicast, where each host/port combination should be accessible.

    Hopefully these basic networking recommendations will help minimize any networking issues you might encounter. (A small sketch of checking a host/port combination follows below.)
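    As a rough illustration of those host/port checks, here is a hedged PowerShell sketch – the host name and port number are hypothetical placeholders for whatever your configuration files reference:

        $targetHost = "apphost01"   # host name from your configuration files
        $port       = 6500          # port assigned to an infrastructure component
        # Is the host/port combination reachable through the network and firewalls?
        $client = New-Object System.Net.Sockets.TcpClient
        try {
            $client.Connect($targetHost, $port)
            Write-Host "$targetHost`:$port is reachable"
        } catch {
            Write-Host "$targetHost`:$port is blocked or nothing is listening"
        } finally {
            $client.Close()
        }
        # Is the port already bound on this host (a common cause of startup failure)?
        netstat -ano | Select-String ":$port "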

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module, and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products.

    [rant] With no prior experience with any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I google is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files, and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little helpful, clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions, which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant]

    The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought. The client is also open to other e-commerce systems. I've looked at OpenCart and it seems to be quite developer-friendly, with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easily porting over the rewards system and ERP integration. What should the client upgrade to in this case?

    Read the article

  • Cannot connect to wireless on 12.04 with Intel WiFi Link 5100

    - by WiData
    I am having a problem connecting to wifi. I dual-boot Windows 7 and Ubuntu 12.04 on my Dell Studio 15. I upgraded to 12.04 from 11.10 quite some time ago (at least one month). Everything was working fine until yesterday. Since yesterday I can see the list of available wifi connections, but it does not connect to any of them, or if it connects (after hours of trying) it disconnects after a few minutes. My wifi interface is an Intel WiFi Link 5100 AGN. However, the problem occurs on both Windows and Ubuntu. Here are the outputs of some commands which may be useful for those interested in helping:

        ~$ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:22:19:fa:65:bb
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:794 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:794 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:56280 (56.2 KB)  TX bytes:56280 (56.2 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:22:fb:d2:fc:ce
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:239 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:53603 (53.6 KB)

    Here is the output of the command sudo lshw -C network:

        *-network
             description: Wireless interface
             product: WiFi Link 5100
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: wlan0
             version: 00
             serial: 00:22:fb:d2:fc:ce
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-48-generic-pae firmware=8.83.5.1 build 33692 latency=0 link=no multicast=yes wireless=IEEE 802.11abgn
             resources: irq:47 memory:f8000000-f8001fff

    My kernel version is 3.2.0-48-generic-pae. I also checked this post, which was helpful, but I am not sure what the exact problem is. Any suggestions will be helpful. Should I be changing the firmware/driver? Currently my /lib/firmware directory has the following iwlwifi files:

        /lib/firmware/iwlwifi-1000-5.ucode    /lib/firmware/iwlwifi-5000-5.ucode
        /lib/firmware/iwlwifi-100-5.ucode     /lib/firmware/iwlwifi-5150-2.ucode
        /lib/firmware/iwlwifi-105-6.ucode     /lib/firmware/iwlwifi-6000-4.ucode
        /lib/firmware/iwlwifi-135-6.ucode     /lib/firmware/iwlwifi-6000g2a-5.ucode
        /lib/firmware/iwlwifi-2000-6.ucode    /lib/firmware/iwlwifi-6000g2a-6.ucode
        /lib/firmware/iwlwifi-2030-6.ucode    /lib/firmware/iwlwifi-6000g2b-6.ucode
        /lib/firmware/iwlwifi-3945-2.ucode    /lib/firmware/iwlwifi-6050-5.ucode
        /lib/firmware/iwlwifi-4965-2.ucode

    Thanks a lot for the help.

    Read the article

  • Development-led security vs administration-led security in a software product?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete Examples

    User management is the OS's responsibility. Not exactly meant as a security feature, but a similar case: Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling web-form fields. A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers and was a welcome feature at the time it was introduced, for people who needed to fill in forms often. But it also brought some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. And this has now been introduced into the upcoming HTML5 standard. For browsers that do not honor this attribute, strange hacks* are offered, like generating unique IDs and names for fields to prevent them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences). In this particular case, and others, I'd argue that this is a user setting and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment/account to use these applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

  • Commit Review Questions

    - by Wes McClure
    Note: in this article, when I refer to a commit, I mean the commit you plan to share with the rest of the team; if you have local commits that you plan to amend/combine, I am referring to the final result. In time you will find these easier to do as you develop; however, all of these are valuable before checking in! The pre-commit review is a nice time to polish what might have been several hours of intense work, during which these things were the last things on your mind. If you are concerned about losing your work in the process of responding to these questions, first do a check-in and amend it as you go (assuming you are using a tool such as git that supports this), rolling the result into one nice commit for everyone else (a sketch of that flow follows this list).

    - Did you review your commit, change by change, with a diff utility? If not, this list is full of reasons why you might want to start!
    - Did you test your changes? If the test is valuable enough to be automated, is it? If it's a manual testing scenario, did you at least try the basics manually?
    - Are the additions/changes formatted consistently with the rest of the project? Lots of automated tools can help here; don't try to manually format the code – that's a waste of time, and as a human you will fail repeatedly. Are these consistent: tabs versus spaces, indentation, spacing, braces, line breaks, etc.? ReSharper is a great example of a tool that can automate this for you (.NET).
    - Are naming conventions respected? Did you accidentally use abbreviations without a good reason to use them? Does capitalization match the conventions in the project/language?
    - Are files partitioned? Sometimes we add new code to existing files in a pinch; it's a good idea to split these out if they don't belong, i.e., are new classes defined in new files, if this is something your project values?
    - Is there commented-out code? If you are removing an existing feature, get rid of it; that is why we have VCS. If it's not done yet, then why are you checking it in? Perhaps a stash commit (git)?
    - Did you leave debug or unnecessary changes?
    - Do you understand all of the changes? http://geekswithblogs.net/wesm/archive/2012/04/11/programming-doesnrsquot-have-to-be-magic.aspx
    - Are there spelling mistakes? Including in your commit message!
    - Is your commit message concise?
    - Is there follow-up work? Are there tasks you didn't write down that you need to follow up on? Are readability or reorganization changes needed? These might be amended into the final commit, or they might be future work that needs to be added to the backlog.
    - Are there other things your team values that you should review?
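    As a concrete, hedged sketch of that check-in-and-amend flow with git (the commit message below is just a placeholder):

        git add --patch                   # review every hunk before staging it
        git diff --staged                 # exactly what will be committed
        git diff --staged --check         # flag stray whitespace errors
        git commit -m "Add order validation"
        # ...respond to the review questions, polish naming/formatting/spelling...
        git add --patch
        git commit --amend                # fold the polish into the same commit
        git stash                         # park genuinely unfinished work instead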

    Read the article

  • Game Center: Leaderboard score inconsistencies

    - by Hasyimi Bahrudin
    Background

    I'm currently developing a simple library that mirrors Game Center's functionality locally. Basically, this library is a system that manages achievements and leaderboards and optionally syncs them with Game Center. So if the game is not GC-enabled, the game will still have achievements and leaderboards (stored inside a plist). But of course, the leaderboards will then only contain the local player's scores (which is kind of useless, I know :P).

    Problem

    Currently I have coded both the achievements and leaderboards subsystems. The achievements subsystem has already been tested and it works. I'm currently testing the leaderboards subsystem using multiple test user accounts. I loaded the test app on a device and on the simulator, each logged in with a different user account. Then I performed these steps:

    1. I first used the device to upload a score.
    2. Then I ran the simulator, and the score submitted by the user on the device is shown. Which is cool.
    3. Then I used the simulator to upload a score. But on the device, still, only one score is listed.

    I checked in the Game Center app (to see if the bug lies within my code), and I got the same thing: under "All Players", there is only one score on the device, but there are 2 scores on the simulator. I wanted to make sure that the simulator was not causing this, so I swapped the users on the device and the simulator, and the result is still the same. In other words, the first user is oblivious to the second user's score, but the second user can see the first user's score. Then I tried with a third user. The result: the third user can only see the scores of the first user and himself. The second user still sees the scores of the first user and himself. The first user only sees his own score. Now here comes the weird part. I then made the first user and the second user befriend each other. The result: under "Friends", the first user can see the second user's score, but under "All Players", the first user's score is the only one listed.

    Screenshots

    [Screenshot: what the first user sees] [Screenshot: what the second user sees]

    So, is this a normal thing when using sandboxed GC accounts? Is this behavior documented somewhere by Apple?

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop, I used the BizTalk Server Best Practices Analyzer, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs, and double-click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog, and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */

    The frequency here is set/left as daily, and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC time). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    ClearBackupHistory:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted – this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down while the instance is allowed to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if it is null, the job will not run, failing with the error:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        SQL Server Management Studio, job activity monitor, view history
        The @nvcFolder parameter cannot be null. Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0
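    Once the steps are configured, you don't have to wait for the schedule to confirm they now succeed. Here is a hedged sketch using sqlcmd from PowerShell against the local default instance (adjust the server name to suit; sp_start_job and sp_help_jobhistory live in msdb):

        # Kick each BizTalk job off on demand
        sqlcmd -S . -E -d msdb -Q "EXEC dbo.sp_start_job @job_name = 'Backup BizTalk Server (BizTalkMgmtDb)'"
        sqlcmd -S . -E -d msdb -Q "EXEC dbo.sp_start_job @job_name = 'DTA Purge and Archive (BizTalkDTADb)'"
        # Review the most recent run status for the backup job
        sqlcmd -S . -E -d msdb -Q "EXEC dbo.sp_help_jobhistory @job_name = 'Backup BizTalk Server (BizTalkMgmtDb)'"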

    Read the article

  • How do I get wireless working on a Dell Inspiron 510m?

    - by user17449
    Why doesn't WiFi work on my Dell Inspiron 510m with Ubuntu 10.04? Is the following output useful?

        inspiron@Inspiron:~$ rfkill list all
        inspiron@Inspiron:~$ sudo lshw -C network
        [sudo] password for inspiron:
          *-network:0 DISABLED
               description: Wireless interface
               product: PRO/Wireless LAN 2100 3B Mini PCI Adapter
               vendor: Intel Corporation
               physical id: 3
               bus info: pci@0000:01:03.0
               logical name: eth1
               version: 04
               serial: 00:0c:f1:5b:5d:40
               width: 32 bits
               clock: 33MHz
               capabilities: pm bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=ipw2100 driverversion=git-1.2.2 firmware=712.0.3:3:00000001 latency=32 link=no maxlatency=34 mingnt=2 multicast=yes wireless=unassociated
               resources: irq:5 memory:fcffe000-fcffefff
          *-network:1
               description: Ethernet interface
               product: 82801DB PRO/100 VE (MOB) Ethernet Controller
               vendor: Intel Corporation
               physical id: 8
               bus info: pci@0000:01:08.0
               logical name: eth0
               version: 81
               serial: 00:11:43:41:d8:b8
               size: 10MB/s
               capacity: 100MB/s
               width: 32 bits
               clock: 33MHz
               capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=e100 driverversion=3.5.24-k2-NAPI duplex=half firmware=N/A ip=192.168.0.2 latency=32 link=no maxlatency=56 mingnt=8 multicast=yes port=MII speed=10MB/s
               resources: irq:11 memory:fcffd000-fcffdfff ioport:ecc0(size=64)
        inspiron@Inspiron:~$ iwconfig wlan0
        wlan0     No such device
        inspiron@Inspiron:~$ ifconfig -a
        eth0      Link encap:Ethernet  Endereço de HW 00:11:43:41:d8:b8
                  inet end.: 192.168.0.2  Bcast:192.168.0.255  Masc:255.255.255.0
                  UP BROADCAST MULTICAST  MTU:1500  Métrica:1
                  pacotes RX:0 erros:0 descartados:0 excesso:0 quadro:0
                  Pacotes TX:0 erros:0 descartados:0 excesso:0 portadora:0
                  colisões:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        eth1      Link encap:Ethernet  Endereço de HW 00:0c:f1:5b:5d:40
                  BROADCAST MULTICAST  MTU:1500  Métrica:1
                  pacotes RX:0 erros:0 descartados:0 excesso:0 quadro:0
                  Pacotes TX:0 erros:0 descartados:0 excesso:0 portadora:0
                  colisões:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  IRQ:5 Endereço de E/S:0xe000 Memória:fcffe000-fcffefff

        lo        Link encap:Loopback Local
                  inet end.: 127.0.0.1  Masc:255.0.0.0
                  endereço inet6: ::1/128 Escopo:Máquina
                  UP LOOPBACK RUNNING  MTU:16436  Métrica:1
                  pacotes RX:628 erros:0 descartados:0 excesso:0 quadro:0
                  Pacotes TX:628 erros:0 descartados:0 excesso:0 portadora:0
                  colisões:0 txqueuelen:0
                  RX bytes:50104 (50.1 KB)  TX bytes:50104 (50.1 KB)

        inspiron@Inspiron:~$ nm-tool
        NetworkManager Tool
        State: connected

        - Device: eth1 -----------------------------------------------------------------
          Type:              802.11 WiFi
          Driver:            ipw2100
          State:             unavailable
          Default:           no
          HW Address:        00:0C:F1:5B:5D:40
          Capabilities:
          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes
          Wireless Access Points

        - Device: eth0 -----------------------------------------------------------------
          Type:              Wired
          Driver:            e100
          State:             unmanaged
          Default:           no
          HW Address:        00:11:43:41:D8:B8
          Capabilities:
            Carrier Detect:  yes
            Speed:           10 Mb/s
          Wired Properties
            Carrier:         off

    Read the article

  • Segmentation fault 11 in Mac OS X – C++ [migrated]

    - by Marcos Cesar Vargas Magana
    Hello all. I have a "segmentation fault 11" error when I run the following code. The code actually compiles, but I get the error at run time.

        // ** Terror.h **
        #include <iostream>
        #include <string>
        #include <map>

        using std::map;
        using std::pair;
        using std::string;

        template<typename Tsize>
        class Terror {
        public:
            // Inserts a message in the map and returns its key.
            static Tsize insertMessage(const string& message) {
                mErrorMessages.insert(pair<Tsize, string>(mErrorMessages.size() + 1, message));
                return mErrorMessages.size();
            }
        private:
            static map<Tsize, string> mErrorMessages;
        };  // semicolon after the class definition (lost in the original paste)

        // Definition of the static member.
        template<typename Tsize>
        map<Tsize, string> Terror<Tsize>::mErrorMessages;

        // ** error.h **
        #include <iostream>
        #include "Terror.h"

        typedef unsigned short errorType;
        typedef Terror<errorType> error;

        // Global initialization: may run before mErrorMessages is constructed,
        // since initialization of template static data members is unordered.
        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

        // ** main.cpp **
        #include <iostream>
        #include "error.h"

        using namespace std;

        int main() {
            try {
                throw error(memoryAllocationError);
            }
            catch (error& err) {
            }
        }

    I have done some debugging, and the error happens when the message is being inserted into the static map member. One observation: if I put the line

        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

    inside the main() function instead of at global scope, then everything works fine. But I would like to be able to extend the error messages at global scope, not at local scope. The map is defined static so that all instances of error share the same error codes and messages. Do you know how I can achieve this, or something similar? Thank you very much.

    Read the article

  • Great opportunity to try Windows Azure over the next 7 days if you are a UK developer – act to

    - by Eric Nelson
    Are you a UK-based developer who has been put off from trying out the Windows Azure Platform? Were you concerned that you needed to hand over credit card details even to use the introductory offer? Or concerned about how many charges you might run up as you played with "elastic computing"? Then we might have just what you need.

    7 days of access to the Windows Azure Platform – for FREE (expires June 6th 2010)

    If you are accepted, you will be given a Windows Azure Platform subscription that will enable you to create Windows Azure hosted services and storage accounts, SQL Azure databases, and AppFabric services without any fear of being charged between now and Sunday the 6th of June 2010. No credit card is required. Important: at the end of Sunday your subscription and all the code and data you have uploaded will be deleted. It is your responsibility to keep local copies of your code and data.

    Apply now

    To apply for this offer you need to:

    - Email ukdev AT microsoft.com with a subject line that starts "UKAZURETRAIL:" (this must be present).
    - In the email, demonstrate that you are UK-based (a .uk email alias or address or... be creative).
    - Include 30 to 100 words explaining what your interest is in the Windows Azure Platform and cloud computing, and what you would use the 7 days to explore.

    Some notes (please read!):

    - We have a limited number of these offers to give away on a first come, first served basis (subject to meeting the above criteria).
    - We plan to process all requests ASAP – but there is a UK bank holiday weekend looming. We will do our best to process all of them by Tuesday afternoon (which would still give you 5 days of access).
    - There will be no specific support for this offer.
    - We will not be processing any requests that arrive after Tuesday the 1st.
    - In case you were wondering, there is no equivalent offer for developers outside of the UK. This offer is a direct result of UK-based training we are currently doing which has some spare Azure capacity that we wanted to make best use of. Sorry in advance if you are based outside of the UK.

    Related links: If you are UK-based, you should also join the UK Windows Azure Platform community: http://ukazure.ning.com. Microsoft UK Windows Azure Platform page.

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: "Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive."

    <rant>

    The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route it to a configurable listener. This gives you a lot of flexibility – you can create a single trace file, or multiple ones. There is even nice tooling around it. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files... blablabla. Just what you would expect from a decent tracing infrastructure.

    Now comes Windows Azure. I was already grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that the way this works is that all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get "lost". They appear at the end of your message text. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage table – the svclog format is gone. So much for the existing tooling.

    To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and use the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism does not work anymore in 1.3 with full IIS (see here). Quoting: "Some IIS 7.0 logs not collected due to permissions issues... The root cause of both of these issues is the permissions on the log files." And the workaround: "To read the files yourself, log on to the instance with a remote desktop connection." Now then, have fun with your multi-instance deployments....

    </rant>

    Read the article

  • Credentials Not Passed From SharePoint WebPart to WCF Service

    - by Jacob L. Adams
    I have spent several hours trying to resolve this problem, so I wanted to share my findings in case someone else has the same problem. I had a web part which was calling out to a WCF service on another server to get some data. The code I had was essentially:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        ...
        var binding = new CustomBinding(
            new HttpTransportBindingElement
            {
                AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate
            }
        );
        var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc"));
        var someOtherService = new SomeOtherServiceClient(binding, endpoint);
        string result = someOtherService.SomeServiceMethod();

    This code would run fine on my local instance of SharePoint 2010 (Windows 7 64-bit). However, when I deployed it to the testing environment, I would get a yellow screen of death with the following message:

        The HTTP request is unauthorized with client authentication scheme 'Negotiate'.
        The authentication header received from the server was 'Negotiate,NTLM'.

    I then went through the usual checklist of Windows Authentication problems:

    - Check the WCF bindings to make sure authentication is set correctly.
    - Check IIS to make sure Windows Authentication is enabled and anonymous authentication is disabled.
    - Check to make sure the SharePoint server trusts the server hosting the WCF service.
    - Verify that the account the IIS application pool is running under has access to the other server.

    I then spent a lot of time digging into really obscure IIS, machine.config, and trust settings (as well as lots of time on Google and Stack Overflow). Eventually I stumbled upon a blog post by Todd Bleeker describing how to run code under the application pool identity. Wait, what? The code is not already running under the application pool identity? Another quick Google search led me to an MSDN page implying that SharePoint indeed does not run under the app pool credentials by default. Instead, SPSecurity.RunWithElevatedPrivileges is needed to run code under the app pool identity. Therefore, changing my code to the following worked seamlessly:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using Microsoft.SharePoint;
        ...
        var binding = new CustomBinding(
            new HttpTransportBindingElement
            {
                AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate
            }
        );
        var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc"));
        var someOtherService = new SomeOtherServiceClient(binding, endpoint);
        string result = null;
        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            result = someOtherService.SomeServiceMethod();
        });

    Read the article

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of PowerShell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL Servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?"

    Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the username and password in the SQLPSX functions." He rather pointedly responded: "My friend, I might as well pass: -Username 'Me' -Password 'NowEverybodyKnowsMyPassword'."

    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his PowerShell ETL session at Tampa SQL Saturday (you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the PowerShell scripts on. Something like: Server1,UserName,Password Server2,UserName,Password"

    I thought about it for just a few milliseconds (ha! Of course I'm not telling you how long it actually took me; I have to do my own marketing, after all) and the solution was finally ready.

    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and the other for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions.

    Next, we have to create a txt file with your encrypted passwords:

        $ServerName = "Server1"
        $UserName = "Login1"
        $Password = "Senha1"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

        $ServerName = "Server2"
        $UserName = "Login2"
        $Password = "senha2"
        $PasswordToEncrypt = "YourPassword"
        $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
        $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
        "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    And in the c:\temp\ServersSecurePassword.txt file we've just created, you will find your usernames and passwords, all neatly encrypted, with server names, usernames, and passwords separated by commas.

    Decryption is actually much simpler:

        Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you use to decrypt must be exactly the same one you used to encrypt.)

    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on the system databases: it's just a case of split, decrypt, and be happy!

        Get-Content c:\temp\ServerSecurePassword.txt | foreach {
            [array] $Split = ($_).split(",")
            Invoke-DBMaint -server $($Split[0]) `
                -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") `
                -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") `
                -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
        }

    This is why I love PowerShell.
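    As a side note on the design choice: on a Windows-only setup, a similar effect can be had with PowerShell's built-in DPAPI support, which avoids keeping the encryption pass-phrase in the script itself. This is just a hedged sketch of that alternative, not part of Laerte's solution – the server and login names are placeholders:

        # Encrypt with DPAPI under the current user (no pass-phrase in the script)
        $secure = Read-Host "Password for Login1 on Server1" -AsSecureString
        "Server1,Login1,$(ConvertFrom-SecureString $secure)" |
            Out-File c:\temp\ServersSecurePassword.txt -Append

        # Later: rebuild a credential from the stored line
        $parts = (Get-Content c:\temp\ServersSecurePassword.txt)[0].Split(",")
        $cred = New-Object System.Management.Automation.PSCredential(
            $parts[1], (ConvertTo-SecureString $parts[2]))

    The trade-off is that a DPAPI-protected string can only be decrypted by the same user on the same machine, whereas the pass-phrase approach above is portable between machines.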

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in EPUB, MOBI and PDF formats. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found they are very comprehensive and detailed. The goal is not to cover a complete technology in a single book, but rather to pick a single topic and discuss it in detail. The sources of the books are white papers, the TechNet wiki, and Books Online, and this is clearly listed right below each book title. The following books are available for SQL Server technology, and I encourage all of you to have a look at them as they are great resources:

    - Master Data Services Capacity Guidelines
    - Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    - Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    - QuickStart: Learn DAX Basics in 30 Minutes
    - SQL Server 2012 Transact-SQL DML Reference

    You can download the above eBooks from here. This is indeed a great effort, as each book talks about a single subject in depth, keeping the author's focus on one simple subject. I have previously written two books focusing on a single subject each, and I had great pleasure writing them. Writing on a focused subject gives the author complete freedom to explore that subject without the burden of covering everything associated with the technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL wait stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL interview Q&A. Writing a book is an absolutely different exercise than writing blog posts. When I was converting my blog posts into books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun, and at the same time a great learning experience.

    Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Saturday, January 22, 2011

    Popular Releases

    - Minecraft Tools: Minecraft Topographical Survey 1.3: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds automatic block list updates, so MTS will recognize blocks added in game updates properly rather than drawing them in bright pink. New in this version of MTS: support for all new blocks added since the Halloween update; auto-update of blockcolors.xml to support future game updates; a splash screen that shows while the program searches for upd...
    - StyleCop for ReSharper: StyleCop for ReSharper 5.1.14996.000: New features: this release is just compiled against the latest release of JetBrains ReSharper 5.1.1766.4. Previous release: a considerable amount of work has gone into this release, with a huge focus on performance around the violation scanning subsystem - caching added to reduce IO operations around reading and merging of settings files, and caching added to reduce creation of expensive objects. Users should notice a considerable perf boost and a decrease in memory usage. Bug fixes...
    - TweetSharp: TweetSharp v2.0.0.0 - Preview 9: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: this code is currently preview quality. Preview 9 changes: added support for lists and suggested users; fixes based on user feedback. Third-party library versions: Hammock v1.1.6 (http://hammock.codeplex.com); Json.NET 4.0 Release 1 (http://json.codeplex.com)
    - jqGrid ASP.Net MVC Control: Version 1.2.0.0: jqGrid 3.8 support; jquery 1.4 support; new and exciting features; many bugfixes; complete separation from the jquery & jqgrid code
    - MediaScout: MediaScout 3.0 Preview 4: Update release
    - Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1: Coding4Fun.Phone.Toolkit v1
    - MFCMAPI: January 2011 Release: Build: 6.0.0.1024. Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64-bit build will only work on a machine with Outlook 2010 64-bit installed. All other machines should use the 32-bit build, regardless of the operating system. Facebook Badge
    - AutoLoL: AutoLoL v1.5.4: Added champion: Renekton. Removed automatic file association. Fix: the recent files combobox didn't always open a file when an item was selected. Fix: removing a recently opened file caused an error
    - DotNetNuke® Community Edition: 05.06.01: Major highlights: fixed issue to remove preCondition checks when upgrading to .Net 4.0; fixed issue where some valid domains were failing email validation checks; fixed issue where editing Host menu page settings assigns the page to a Portal; fixed issue which caused XHTML validation problems in 5.6.0; fixed issue where an aspx page in any subfolder was inaccessible; fixed issue where the Config.Touch method signature had an unintentional breaking change in 5.6.0; fixed issue which caused...
    - MiniTwitter: 1.65: MiniTwitter 1.65 (the Japanese release notes are garbled in the original; the legible fragments mention List, in-reply-to, and OAuth changes)
    - ASP.net Ribbon: Version 2.1: Tadaaa... So Version 2.1 brings a lot of things... Have a look at the homepage to see what's new. Also, I wanted to (really) improve the Designer. I wanted to add great things... but... it took too much time. And as some of you were waiting for fixes, I decided just to fix bugs and add some features. So have a look at the demo app to see new features. Thanks! (You can expect some releases if bugs are not fixed correctly... 2.2, 2.3, 2.4....)
    - iTracker Asp.Net Starter Kit: Version 3.0.0: This is the initial release of the version 3.0.0 Visual Studio 2010 (.Net 4.0) remake of the ITracker application. I consider this a working, stable application, but since there are still some features missing to make it "complete" I'm leaving it listed as a "beta" release. I am hoping to make it feature complete for v3.1.0, but anything is possible.
    - ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager. Changes: RenderView controller extension works for razor also; live demo switched to razor
    - BloodSim: BloodSim - 1.3.3.1: Priority update to resolve a bug that was causing Boss damage to ignore Blood Shields entirely
    - Rawr: Rawr 4.0.16 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.php. This is the Cataclysm beta release. More details can be found at the following link: http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262. As of this release, you can now also begin using the new downloadable WPF version of Rawr! This is a pre-alpha release of the WPF version; there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. W...
    - MvcContrib: an Outer Curve Foundation project: MVC 3 - 3.0.51.0: Please see the Change Log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip (MvcContrib.dll, MvcContrib.TestHelper.dll); MvcContrib.Extras.Release.zip (T4MVC, plus the extra view engines / controller factories and other functionality in the project; this file includes the main MvcContrib assembly). Samples are included in the release. You do not need MvcContrib if you download the Extras.
    - Yahoo! UI Library: YUI Compressor for .Net: Version 1.5.0.0 - Jalthi: Updated solution to VS2010. New: Work Item #4450 - optional MSBuild task parameter: do not error if no files were found. Fixed: Work Item #5028 - output file encoding is the same as the optional MSBuild task encoding argument. Fixed: Work Item #5824 - MSBuilds were slow after the first build, due to the current thread being forced to en-gb on non-en-gb systems. Changed: Work Item #6873 - project license changed from MS-PL to GPLv2. New: added all the unit tests from the Java YU...
    - N2 CMS: 2.1.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 2.1.1 is a maintenance release (list of changes). 2.1 major changes: support for auto-implemented properties ({get;set;}, based on a contribution by And Poulsen); file manager improvements (multiple file upload, resize images to fit); new image gallery; infinite scroll paging on news; content templates. First time with N2? Try the demo site. Download one of the template packs (above) and open...
    - DNN Menu Provider: 01.00.00 (DNN 5.X and newer): DNN Menu Provider v1.0.0. Our first release build is a stable release. It requires at least DotNetNuke 5.1.0; previous DNN versions are not supported. Major highlights: an easy-to-use template system built on a self-made and powerful engine; ready for multilingual websites; a flexible translation integration based on the ASP.NET Provider Model, as well as a built-in provider for DNN Localization (DNN 5.5.X or later required) and a provider for the EALO Translation API; advanced in-memory caching function...
    - VidCoder: 0.8.1: Adds the ability to choose an arbitrary range (in seconds or frames) to encode. Adds the ability to override the title number in the output file name when enqueuing multiple titles. Updated presets: added iPhone 4 and Apple TV 2, and fixed some existing presets that should have had weightp=0 or trellis=0 on them. Added {parent} option to auto-name format; use {parent:2} to refer to a folder 2 levels above the input file. Added {title:2} option to auto-name format; adds leading zeroes to reach the sp...

    New Projects

    - agap: Application management system. (French in the original.)
    - Blog in Silverlight With BlogEngine: I love Silverlight and want to build my blog in Silverlight based on BlogEngine.NET
    - CavemanTools: Easy-to-use toolkit with extension methods and special goodies for faster development, written in C# on .Net 3.5.
    - CloudCompanion: Windows Azure performance measurement & analytics tools.
    - Delivery Madness: XNA game development example
    - dFactor: Visual Studio add-in. Provides extended refactoring options.
    - GpSpotMe: The ultimate goal of GPSmapPoint is to show the user's position on a topographic map, but that is just an excuse to learn and have fun programming. It is developed in C# on .NET Compact Framework 3.5 and can be used on phones with Windows Mobile 6.x. (Spanish in the original.)
    - Investment Calculator: Allows you to evaluate how much money you could have today if you had invested in one of the predefined investment opportunities in the database at a given date.
    - jbucknor: jbucknor
    - Load Generation: Generates load according to a given pattern over a long period of time for the given number of users. It can vary the load in different patterns, such as linear, exponential, etc. Load can be generated on a custom or standard protocol
    - Orchard Profile Module: Provides user profiles
    - Orchard Super Classic Theme: This Orchard theme is used to demonstrate the dynamic theme capabilities offered by Orchard. Based on the Classic Theme for Orchard; you can choose among different variations from the Dashboard.
    - Orchard Youtube Field Module: Youtube field for Orchard.
    - Osac DataLayer: Database layer to simplify the use of database connections, SQL queries and commands. Powered by OSAC. http://www.osac.nl
    - Pascal Design: Pascal Design is for users writing Pascal applications or for students learning Pascal who don't need to work out the design and want to make some fast "screens". It's developed in C#
    - PathEditor: This tool simplifies access to the environment path variable. No long line; a nice grid for easy and secure editing, with directory checking.
    - PLG Graphic File Analyzer: PLG Analyzer is an application that loads a graphic scene described in PLG format, parses the file, and displays the information in it on the screen in 3D coordinates using OpenTK for C#. This project is a university project (homework, not a final project), so please bear that in mind :)
    - Read Application Cofiguration in Javascript: Would it be good if "AppSettings" could be accessed with "Javascript.Configuration.ConfigurationManager.AppSettings["ServiceUrl"]"?
    - Route#: A Microsoft .NET 4.0 C#/F# application to load routes and find minimal paths. The application is meant to let the user create a network of nodes and connections representing a web to route. Starting from a point A, the application will find the minimal path to B.
    - Send SMS: Send SMS allows you to send SMS (Short Messaging Service) messages to all mobile operators across India. It allows you to send SMS through popular service providers like Way2SMS, Full On Sms, Tezsms, Sms Inside and more.
    - SocialMe: This is a simple Facebook, MySpace & Twitter client, but small!
    - StockViewer: This is an interactive way to browse stocks.
    - Thailand IC2011 Training: Used for IC2011 training at KU. Topics: Windows Presentation Foundation, Silverlight, Windows Phone 7
    - TipTip: Online betting. (Macedonian in the original.)
    - Web Content Compressor: The objective of this project is to compress any JavaScript (js files or inline script in some web files) and Cascading Style Sheets to an efficient level that works exactly as the original source before it was minified. The library is also integrated with MSBuild
    - wuyuhu: wuyuhu
    - Xaml Mvp: With Xaml Mvp you can create Windows Phone 7 applications based on the MVP pattern. By separating data from behaviour you still get the benefit of MVVM without your view model becoming a fat controller. Silverlight and WPF coming soon! Developed using C#
    - XnaWavPack: Native C# WavPack decoder, intended for use with the dynamic sound API in XNA 4.

    Read the article

  • PMI South Florida Job Fair 2010

    - by Sam Abraham
    The South Florida Chapter of the Project Management Institute is planning a Job Fair slated for September 2010. This year has seen a significant improvement in the job market, with many surveyed companies indicating their intention to add temporary or permanent staff to their workforce in the near future.

    The Job Fair initiative fits well within the chapter's message and goal for this year: "Exercising Social Responsibility" - our responsibility as PMI volunteers at all levels towards our members and the surrounding community.

    Our free-to-members annual Job Fair will play an important role in connecting recruiters, exhibitors and job seekers, thereby helping hiring companies gain access to a large talent pool at an affordable cost (totally free in certain cases; details to be revealed once finalized) while giving job seekers centralized access to many reputable hiring companies in the South Florida area.

    My involvement in the 2010 Job Fair started with a good conversation I had with Bernie Saenz, President and CEO of the South Florida PMI Chapter, at a networking event a few months ago. I had approached him with a few ideas in line with his goal to serve the community and our members given today's difficult economic climate. Bernie indicated that the Project Manager for the 2010 Job Fair had just been appointed and invited me to participate in this important initiative as a member of her team. I simply couldn't resist and gladly accepted the invitation.

    I chose an initial role as Recruiter Relations Lead, which entails developing documentation and timelines for our project plan with regard to recruiter engagement, as well as reaching out to recruiting companies to meet target representation at the Job Fair.

    Being heavily involved in the local technical community has afforded me the privilege of coming in contact with many reputable technology recruiting companies. (As a matter of fact, I already have two very reputable IT recruiting firms willing to join us at the fair.)

    The excitement for me, however, will be finding and reaching out to recruiters in areas of project management and leadership that I might not have been exposed to before, including finance, healthcare and marketing, to name a few.

    Keep an eye out in the upcoming few weeks for official announcements on the PMI South Florida Job Fair 2010.

    Environment.Exit(0);

    -Sam Abraham
    Site Director - West Palm Beach .Net User Group
    Recruiter Relations Lead - PMI South Florida Job Fair 2010
    Project Lead - Mentoring Programs - PMI South Florida

    Read the article

  • Using Movemail with Thunderbird on Ubuntu

    - by rxt
    I'm trying to read local mail with Thunderbird on Ubuntu (with both 12.04 and 13.04). I've followed the instructions found here: How can I access system mail in /var/mail/ via thunderbird? I can read mail on the system using alpine or vim, so I know the mailbox is not empty. When I click the get-mail button, nothing happens. I see no Inbox (or any folder structure) for the specific account. I've set the rights for /var/mail to 1777.

    Settings:
    server name: localhost
    username: john

    How can I get this working? Okay, considering the extra bounty, I would like to get this working like normal mail. The accepted answer from Qasim resulted in a much more usable situation than before - opening mail in Thunderbird with layout. I still face three problems though:

    1. When new mail is received in the mailbox, Thunderbird won't see it until I restart Thunderbird.

    2. When Thunderbird is restarted, all mail is reset to unread and deleted mail reappears. This is probably because Thunderbird reads the mail from the /var/mail/www-data file but doesn't update this file, so after restarting it simply reads this file again, with the new mail and all the old mail.

    3. This is probably a postfix issue: mail is sent out to existing mail addresses, but cannot be delivered because the receiving mail server cannot be reached. This results in "Undelivered mail returned to sender". Only one mail server can be reached: localhost. Because this is a test system, I don't want real customers to receive mail. I've blocked mail ports in UFW to be sure. When opening the returned mail, I can scroll down and then see the original mail with proper layout. So I can read the mail, see if the proper images are included, and for me that's workable.

    Having to restart Thunderbird to read new mail - I know when new mail arrives, so I know when to restart. Having old mail restored after a restart - not a big problem either; I can delete the mail file if it gets too large. I know how it works, but it would be nice if it worked like normal.
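
    A quick terminal sanity check helps narrow this kind of problem down - a minimal sketch, assuming an mbox spool under /var/mail and that the mailutils "mail" command is installed (the test address is purely illustrative):

        # Spool permissions movemail expects (sticky + world-writable so it
        # can create lock files; the question already sets this):
        sudo chmod 1777 /var/mail

        # Deliver a test message locally - assumes local delivery via
        # postfix is working:
        echo "test body" | mail -s "movemail test" john@localhost

        # The mbox file should exist and grow with each delivery:
        ls -l /var/mail/john
        tail -n 5 /var/mail/john

    If the file grows but Thunderbird still shows nothing, the problem is on the Thunderbird/movemail side rather than in mail delivery.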

    Read the article

  • Why am I getting a "network is unreachable" error on Ubuntu Server?

    - by jason328
    I'm completely new to Ubuntu Server and am having a hard time connecting the server to the internet. I first ran:

        ping 8.8.8.8
        connect: Network is unreachable

    Then I ran ifconfig:

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436
              RX packets:192 errors:0 dropped:0 overruns:0 frame:0
              TX packets:192 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:15360 (15.3 KB)  TX bytes:15360 (15.3 KB)

    Here is the output of sudo lspci -n:

        00:00.0 0600: 8086:2580 (rev 04)
        00:02.0 0300: 8086:2582 (rev 04)
        00:1d.0 0c03: 8086:2658 (rev 03)
        00:1d.1 0c03: 8086:2659 (rev 03)
        00:1d.0 0c03: 8086:265a (rev 03)
        00:1d.0 0c03: 8086:265b (rev 03)
        00:1d.0 0c03: 8086:265c (rev 03)
        00:1e.0 0604: 8086:244e (rev d3)
        00:1e.0 0401: 8086:266e (rev 03)
        00:1f.0 0601: 8086:2640 (rev 03)
        00:1f.0 0101: 8086:2651 (rev 03)
        00:1f.0 0c05: 8086:266a (rev 03)
        00:0b.0 0200: 8086:1654 (rev 03)

    lshw -C network returns:

        WARNING: you should run this program as super-user.
        *-network DISABLED
             description: Ethernet interface
             product: NetXtreme BCM5705_2 Gigabit Ethernet
             vendor: Broadcom Corporation
             physical id: b
             bus info: pci@0000:0a:0b.0
             logical name: eth0
             capabilities: bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 firmware=5705-v3.18 latency=32 mingnt=64 multicast=yes port=twisted pair

    lsmod returned this:

        Module                  Size  Used by
        e100                   37213  0
        dm_crypt               23125  1
        ppdev                  17113  0
        psmouse                87603  0
        snd_intel8x0           38570  0
        snd_ac97_codec        134826  1 snd_intel8x0
        ac97_bus               12730  1 snd_ac97_codec
        snd_pcm                97188  2 snd_intel8x0,snd_ac97_codec
        serio_raw              13211  0
        snd_timer              29990  1 snd_pcm
        snd                    78855  4 snd_intel8x0,snd_ac97_codec,snd_pcm,snd_timer
        soundcore              15091  1 snd
        snd_page_alloc         18529  2 snd_intel8x0,snd_pcm
        ext2                   73795  1
        parport_pc             32866  1
        mac_hid                13253  0
        lp                     17799  0
        parport                46562  3 ppdev,parport_pc,lp
        usbhid                 47199  0
        hid                    99559  1 usbhid
        tg3                   152032  0
        i915                  468651  1
        floppy                 70365  0
        drm_kms_helper         46978  1 i915
        drm                   242038  2 i915,drm_kms_helper
        i2c_algo_bit           13423  1 i915
        video                  19596  1 i915

    Again, there is more, but it's just info on the driver itself. I know the card works; I've used it. I assume then that my network got disabled when I installed Ubuntu Server. How do I enable it? I checked, and the internet cable is connected to the D-link router. I have also used this same computer for internet access when I had Ubuntu Desktop installed, so the internet connection itself does work.
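
    The lshw output already points at the likely fix: the Broadcom NIC (logical name eth0) is marked DISABLED. A minimal sketch of bringing it up, assuming the interface is eth0 and the D-link router serves DHCP (taken from the question's output, not verified):

        # Bring the interface up and ask the router for a lease:
        sudo ifconfig eth0 up
        sudo dhclient eth0

        # Make it persistent across reboots by declaring eth0 in
        # /etc/network/interfaces:
        #     auto eth0
        #     iface eth0 inet dhcp
        # then apply:
        sudo /etc/init.d/networking restart

        # Verify:
        ifconfig eth0
        ping -c 4 8.8.8.8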

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git submodules. I like being able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced).

    I'm setting up a workflow for multiple projects at work, and I was wondering what it would be like if we took this approach to a bit of an extreme, instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using submodules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

    core is standalone
    graph depends on core (submodule at ./libs/core)
    network depends on core (submodule at ./libs/core)
    studio depends on graph and network (submodules at ./libs/graph and ./libs/network)
    player depends on graph and network (submodules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the works. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

    studio/
    studio/libs/                    (submodule depth: 1)
    studio/libs/graph/
    studio/libs/graph/libs/         (submodule depth: 2)
    studio/libs/graph/libs/core/
    studio/libs/network/
    studio/libs/network/libs/       (submodule depth: 2)
    studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question

    How do I organize submodules so that I get the versioned dependency and standalone build without getting multiple copies of common nested submodules?

    Possible solution

    If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    1. Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    2. Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core submodule.
    3. Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core submodule, then builds graph and network in "dependency" mode.

    The resulting folder structure looks like this (a sketch of the corresponding checkout commands follows at the end of this post):

    studio/
    studio/libs/                    (submodule depth: 1)
    studio/libs/core/
    studio/libs/graph/
    studio/libs/graph/libs/         (empty folder, submodules not fetched)
    studio/libs/network/
    studio/libs/network/libs/       (empty folder, submodules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work when it comes to version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
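
    A minimal sketch of the "dependency mode" checkout described above, using the layout from the question; the repository URL and the CORE_INCLUDE_DIR flag name are illustrative assumptions, not part of any real project:

        # Clone studio and fetch only the first level of submodules;
        # omitting --recursive keeps graph and network from pulling in
        # their own nested copies of core:
        git clone git@example.com:studio.git    # hypothetical URL
        cd studio
        git submodule update --init             # core, graph, network only

        # Build graph in "dependency" mode, pointing it at studio's
        # single copy of core (the cache variable name is an assumption
        # about how the CMake scripts would be written):
        cmake -DCORE_INCLUDE_DIR="$PWD/libs/core/include" -S libs/graph -B build/graph
        cmake --build build/graph

    The same invocation with libs/network would build the other library, and studio's own build would then link against both.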

    Read the article

  • JUDCon 2013 Trip Report

    - by reza_rahman
    JUDCon (JBoss Users and Developers Conference) 2013 was held in historic Boston on June 9-11 at the Hynes Convention Center. JUDCon is the largest get-together for the JBoss community; it has gone global in recent years but has its roots in Boston. The JBoss folks graciously accepted a Java EE 7 talk from me and actually referenced my talk in their own sessions. I am proud to say this is my third time speaking at JUDCon/the Red Hat Summit over the years (this was the first time on behalf of Oracle).

    I had great company, with many of the rock stars of the JBoss ecosystem speaking, such as Lincoln Baxter, Jay Balunas, Gavin King, Mark Proctor, Andrew Lee Rubinger, Emmanuel Bernard and Pete Muir. Notably missing from JUDCon were Bill Burke, Burr Sutter, Aslak Knutsen and Dan Allen. Topics included Java EE, Forge, Arquillian, AeroGear, OpenShift, WildFly, Errai/GWT, NoSQL, Drools, jBPM, OpenJDK, Apache Camel and JBoss Tools/Eclipse.

    My session, titled "JavaEE.Next(): Java EE 7, 8, and Beyond", went very well and it was a full house. This is our main talk covering the changes in JMS 2, the Java API for WebSocket (JSR 356), the Java API for JSON Processing (JSON-P), JAX-RS 2, JPA 2.1, JTA 1.2, JSF 2.2, Java Batch, Bean Validation 1.1, Java EE Concurrency and the rest of the APIs in Java EE 7. I also briefly talked about the possibilities for Java EE 8. The slides for the talk are here: "JavaEE.Next(): Java EE 7, 8, and Beyond" from reza_rahman.

    Besides presenting my talk, it was great to catch up with the JBoss gang and attend a few interesting sessions. On Sunday night I went to one of my favorite hangouts in Boston - the exalted Middle East Club, as Rolling Stone refers to it (another cool spot in an otherwise pretty boring town is "the Church"). As contradictory as it might sound to the uninitiated, the Middle East Club is possibly the best place in Boston to simultaneously get great Middle Eastern (primarily Lebanese) food and great underground metal. For folks with a bit more exposure this is probably not contradictory at all, given bands like Acrassicauda and documentaries like Heavy Metal in Baghdad. Luckily for me they were featuring a few local thrash metal bands from the greater Boston area. It wasn't too bad considering it was primarily amateur twenty-something guys (although I'm not sure I'm a qualified critic any more, since I all but stopped playing at about that age). It's great that Boston has the Middle East as an incubator to keep the rock, metal, folk, jazz, blues and indie scene alive.

    I definitely enjoyed JUDCon/Boston and hope to be part of the conference next year again.

    Read the article

  • Open port in gufw is closed, no incoming connection on deluge

    - by user66987
    I have a problem configuring gufw. I open ports in it, but when I test in Deluge they show as closed. Any help setting up the firewall would be greatly appreciated. This is the output I get from the firewall in a terminal:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address      Foreign Address  State   PID/Program name
        tcp        0      0 127.0.0.1:53       0.0.0.0:*        LISTEN  1346/dnsmasq
        tcp        0      0 127.0.0.1:631      0.0.0.0:*        LISTEN  970/cupsd
        tcp        0      0 0.0.0.0:55521      0.0.0.0:*        LISTEN  17362/python
        tcp6       0      0 ::1:631            :::*             LISTEN  970/cupsd
        tcp6       0      0 :::55521           :::*             LISTEN  17362/python
        udp        0      0 10.0.0.125:1900    0.0.0.0:*                17362/python
        udp        0      0 127.0.0.1:1900     0.0.0.0:*                17362/python
        udp        0      0 0.0.0.0:1900       0.0.0.0:*                17362/python
        udp        0      0 127.0.0.1:53162    0.0.0.0:*                17362/python
        udp        0      0 127.0.0.1:53       0.0.0.0:*                1346/dnsmasq
        udp        0      0 0.0.0.0:68         0.0.0.0:*                1312/dhclient
        udp        0      0 10.0.0.125:36948   0.0.0.0:*                17362/python
        udp        0      0 0.0.0.0:37240      0.0.0.0:*                17362/python
        udp        0      0 10.0.0.125:6771    0.0.0.0:*                17362/python
        udp        0      0 127.0.0.1:6771     0.0.0.0:*                17362/python
        udp        0      0 0.0.0.0:6771       0.0.0.0:*                17362/python
        udp        0      0 10.0.0.125:50034   0.0.0.0:*                17362/python
        udp        0      0 0.0.0.0:58340      0.0.0.0:*                982/avahi-daemon: r
        udp        0      0 0.0.0.0:5353       0.0.0.0:*                982/avahi-daemon: r
        udp        0      0 0.0.0.0:56947      0.0.0.0:*                17362/python
        udp        0      0 127.0.0.1:57059    0.0.0.0:*                17362/python
        udp6       0      0 :::49793           :::*                     982/avahi-daemon: r
        udp6       0      0 :::5353            :::*                     982/avahi-daemon: r

        kenneth@kenneth-K53U:~$ sudo ufw status
        Status: active

        To                         Action      From
        --                         ------      ----
        6881:6891/tcp              ALLOW       Anywhere
        6881:6891/udp              ALLOW       Anywhere
        55521/tcp                  ALLOW       Anywhere
        6881:6891/tcp              ALLOW       Anywhere (v6)
        6881:6891/udp              ALLOW       Anywhere (v6)
        55521/tcp                  ALLOW       Anywhere (v6)

    I also want to be able to use the firewall with linuxdc, so I need other ports open as well. This is tied to the firewall, because when I turn it off the port is open, so this is not a problem with my modem. Do I need the firewall at all? The broadband modem has a hardware firewall.

    Update: I forgot to add that when my firewall is inactive, ports still get closed after a while, so when I use linuxdc I have to flush iptables and activate it again. Is it supposed to do this when the firewall is deactivated?

    Update again: All my ports are closed now, and flushing iptables does not work anymore. I have uninstalled gufw, but all my ports are still closed. And to repeat: this has nothing to do with my broadband modem, since it worked when I used Windows 7. I need help to open the ports.
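
    One thing worth checking from the output above - a minimal sketch, with the port number taken from the netstat listing (that the python process is Deluge is an assumption):

        # Deluge appears to be listening on TCP 55521, but the ruleset only
        # allows 55521/tcp; Deluge's DHT and uTP traffic also uses UDP, so
        # open the matching UDP port too:
        sudo ufw allow 55521/tcp
        sudo ufw allow 55521/udp

        # Reload and re-check the ruleset:
        sudo ufw reload
        sudo ufw status verbose

    If the port still tests closed from outside with ufw disabled, the block is upstream (e.g. missing port forwarding on the router/modem) rather than in ufw.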

    Read the article

  • Forms&Reports upgrade characterset issues

    - by Lukasz Romaszewski
    Hello. This quick post is based on my findings during recent IMC workshops, especially those related to upgrading Forms 6i/9i/10g applications to the Forms 11g platform. The upgrade process itself is pretty straightforward, and basically requires recompiling your Forms application with the latest version of the frmcmp tool. In some cases though, especially when you migrate from Forms 6i, which is a client-server architecture, to a 3-tier web solution (Forms 11g), you need to rewrite some parts of your code to make it run on the new platform. The things you need to change range from reimplementing (using the webutil library) typical client-side functionality like local IO operations, access to the WinAPI, invoking DLLs, etc., to changing deprecated or obsolete APIs like RUN_PRODUCT to RUN_REPORT_OBJECT.

    To automate those changes Oracle provides a complete Java API (JDAPI) which allows you to manipulate the code and structure of your modules. To make it even easier we can use the Forms Migration Assistant tool (written in Java using JDAPI), which is able to replace all occurrences of old API entries with their 11g equivalents, or warn you when a replacement is not possible. You can also add your own replacement definitions in the search_replace.properties file. But you need to be aware of some issues that can be encountered when using this tool.

    First of all, if you are using hard-coded text inside your triggers, you may notice that after processing them with the Migration Assistant tool the national characters are lost. This is because you need to explicitly tell the Java application (which the Migration Assistant really is) what characterset it should use to read that text properly. In order to do that, just add the following line to the script calling the Migration Assistant:

        export JAVA_TOOL_OPTIONS=-Dfile.encoding=<JAVA_ISO_ENCODING>

    where the encoding must match the NLS_LANG in your Forms Builder environment (for example, for the Polish characterset you need to use ISO-8859-2; a minimal sketch of such a launch script appears at the end of this post).

    The second issue you can encounter related to national charactersets is a lack of national symbols in your reports after migration. This can be solved by adding an appropriate NLS_LANG entry in your Reports environment. Sometimes, instead of a particular characterset, you see "Greek characters" in your reports. This is just the default font used by the Reports engine instead of the one defined in your report. To solve it you must copy the font definitions from your old environment (e.g. a Forms 10g installation) to the appropriate directory in the new installation (usually the AFM folder).

    For more information about this and other issues please refer to https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=1297012.1 at the My Oracle Support site.

    That's all for today, stay tuned for more posts on this topic!

    Lukasz
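
    A minimal sketch of such a launch script, using the Polish example from the post. The jar name and module path are illustrative assumptions rather than the tool's documented interface, and POLISH_POLAND.EE8ISO8859P2 is assumed to be the NLS_LANG matching ISO-8859-2 in the Forms Builder environment:

        #!/bin/sh
        # Tell the JVM (and therefore the Migration Assistant) which
        # characterset the module sources use - must match NLS_LANG:
        export JAVA_TOOL_OPTIONS=-Dfile.encoding=ISO-8859-2
        export NLS_LANG=POLISH_POLAND.EE8ISO8859P2

        # Illustrative invocation - jar name and module path are
        # assumptions for the sake of the example:
        java -jar migration_assistant.jar my_module.fmb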

    Read the article

< Previous Page | 549 550 551 552 553 554 555 556 557 558 559 560  | Next Page >