Search Results

Search found 8094 results on 324 pages for 'campus solutions'.


  • Real World Nuget

    - by JoshReuben
    Why Nuget
    A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects etc. → escape from Solution Hell.
    Links
    · Using A GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    · Creating a Nuspec File - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    · Consuming a Nuget Package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    · Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    · Versioning - http://docs.nuget.org/docs/reference/versioning
    POC Folder Structure
    POC Setup Steps
    · Install Package Explorer
    · Source
      o Create a source solution – configure the output directory for projects (Project > Properties > Build > Output Path)
    · Package
      o Add assemblies to the package from the output directory (drag & drop) - add a net folder
      o File > Export – save the .nuspec file and lib contents:
        <?xml version="1.0" encoding="utf-16"?>
        <package>
          <metadata>
            <id>MyPackage</id>
            <version>1.0.0.3</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <summary />
          </metadata>
        </package>
      o File > Save – saves the .nupkg file
    · Create Target Solution
      o In Tools > Options: configure the package source
      o Add the package and select projects. Output from the package manager (PowerShell console):
        ------- Installing...MyPackage 1.0.0 -------
        Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
        Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
        Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
        Successfully installed 'MyPackage 1.0.0'.
        Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
        Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
        Added file 'packages.config'.
        Added file 'packages.config' to project 'AssemblyX'
        Added file 'repositories.config'.
        Successfully added 'MyPackage 1.0.0' to AssemblyX.
    o A Packages folder is created at the solution level
    o A packages.config file is generated in each project:
      <?xml version="1.0" encoding="utf-8"?>
      <packages>
        <package id="MyPackage" version="1.0.0" targetFramework="net40" />
      </packages>
    A local Packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents – e.g. the dlls that the project references. Note: this folder is not checked in.
    Update Packages
    o Configure Package Manager to automatically check for updates
    o Browse packages - it automatically picked up the updates
    Update Procedure
    · Modify the source
    · Change the source version in the assembly info
    · Build the source
    · Open the last package in Package Explorer
    · Increment the package version number and re-add the assemblies
    · Save the package with the new version number and export its definition
    · In the target solution – Tools > Manage Nuget Packages – click on All to trigger a refresh, then click on recent packages to see the updates
    · If problematic, delete the packages folder
    Versioning
      uninstall-package mypackage
      install-package mypackage –version 1.0.0.3
      uninstall-package mypackage
      install-package mypackage –version 1.0.0.4
    Dependencies
      <?xml version="1.0" encoding="utf-16"?>
      <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
        <metadata>
          <id>MyDependentPackage</id>
          <version>1.0.0</version>
          <title />
          <authors>josh-r</authors>
          <owners />
          <requireLicenseAcceptance>false</requireLicenseAcceptance>
          <description>My package description.</description>
          <dependencies>
            <group targetFramework=".NETFramework4.0">
              <dependency id="MyPackage" version="1.0.0.4" />
            </group>
          </dependencies>
        </metadata>
      </package>
    Using NuGet without committing packages to source control
    http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages
    Right click on the Solution node in Solution Explorer and select Enable NuGet Package Restore. Recall that the packages folder is not part of the solution. If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ & allow unrestricted access to packages.nuget.org.
    To test connectivity: get-package –listavailable
    To test NuGet Package Restore – delete the packages folder and open VS as admin.
    In the NuGet MSBuild targets: <Import Project="$(SolutionDir)\.nuget\nuget.targets" />
    TFSBuild Integration
    Modify the Nuget.Targets file:
      <RestorePackages Condition=" '$(RestorePackages)' == '' ">True</RestorePackages>
      …
      <PackageSource Include="\\IL-CV-004-W7D\Packages" />
    Add the system environment variable EnableNuGetPackageRestore=true & restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.
    Nugetter TFS Build Integration
    Add the Nugetter build process templates to TFS source control. For the Build Controller, specify the location of the custom assemblies.
    Generate the .nuspec file from Package Explorer: File > Export. Edit the file elements – remove the path info from the src and target attributes:
      <?xml version="1.0" encoding="utf-16"?>
      <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
        <metadata>
          <id>Common</id>
          <version>1.0.0</version>
          <title />
          <authors>josh-r</authors>
          <owners />
          <requireLicenseAcceptance>false</requireLicenseAcceptance>
          <description>My package description.</description>
          <dependencies>
            <group targetFramework=".NETFramework3.5" />
          </dependencies>
        </metadata>
        <files>
          <file src="CommonTypes.dll" target="CommonTypes.dll" />
          <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
      …
    Add the .nuspec file to the solution so that it is available for build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec
    Add a Build Process Definition based on the Nugetter build process template. Configure the build process – specify:
    · the .sln to build
    · the base path (output directory)
    · the Nuget.exe file path
    · the .nuspec file path
    Copy DLLs to a binary folder
    1) Set copy local for an assembly reference to false
    2) MSBuild Copy Task – modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx
      <ItemGroup>
        <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
      </ItemGroup>
      <Target Name="BeforeBuild">
        <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
      </Target>
    3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx
      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <probing privatePath="SourceAssemblies"/>
          </assemblyBinding>
        </runtime>
      </configuration>
    Forcing 'copy local = false'
    The following generic PowerShell script was added to the package's install.ps1:
      param($installPath, $toolsPath, $package, $project)
      # Skip the dedicated "CopyPackages" project - it is the only project allowed to copy package assemblies locally.
      if ($project.Object.Project.Name -ne "CopyPackages")
      {
          $asms = $package.AssemblyReferences | %{ $_.Name }
          foreach ($reference in $project.Object.References)
          {
              # Turn off Copy Local for every project reference that came from this package.
              if ($asms -contains ($reference.Name + ".dll"))
              {
                  $reference.CopyLocal = $false
              }
          }
      }
    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background.
    1. Schemas and Databases
    The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is quite a different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.
    2. Connecting to Oracle
    The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated.
    SIDs, Service Names, and TNS
    A database (the files on disk) must have a unique identifier for the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, you are using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:
      SC_11GDB1 =
        (DESCRIPTION =
          (ADDRESS_LIST =
            (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
          )
          (CONNECT_DATA =
            (SID = 11gR1db1)
          )
        )
    This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
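    By way of illustration (a minimal sketch, not taken from the original post), such a TNS alias might be used as the Data Source of an ODP.NET-style connection string along these lines; the credentials are placeholders:
      // Minimal sketch (not from the original post): connect using the TNS alias
      // defined in tnsnames.ora as the Data Source. Credentials are placeholders.
      using Oracle.DataAccess.Client; // or any provider exposing an OracleConnection

      class TnsConnectSketch
      {
          static void Main()
          {
              // "SC_11GDB1" is the alias from the tnsnames.ora entry shown above.
              string connectionString = "Data Source=SC_11GDB1;User Id=scott;Password=tiger;";
              using (OracleConnection conn = new OracleConnection(connectionString))
              {
                  conn.Open();
                  System.Console.WriteLine("Connected to: " + conn.ServerVersion);
              }
          }
      }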
    Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they can run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.
    3. Running synchronization scripts
    The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has got some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.
    PL/SQL is not T-SQL
    In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation gives them at the end of the syntax diagrams).
    When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus, the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so:
      BEGIN
        INSERT INTO table1 VALUES (1);
        INSERT INTO table1 VALUES (2);
      END;
      /
    In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
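    As a rough illustration of the EXECUTE IMMEDIATE wrapping described above (a sketch under assumptions, not Red Gate's actual code), the batching might look something like this in C# against a generic ADO.NET connection:
      // Sketch only: wrap each DDL statement, stripped of its trailing semicolon,
      // in an EXECUTE IMMEDIATE call inside one anonymous PL/SQL block, so the
      // whole batch goes to the server in a single round trip.
      using System.Collections.Generic;
      using System.Data;
      using System.Text;

      static class DdlBatcher
      {
          public static void RunBatch(IDbConnection connection, IEnumerable<string> ddlStatements)
          {
              var block = new StringBuilder("BEGIN\n");
              foreach (var ddl in ddlStatements)
              {
                  // PL/SQL string literals escape embedded quotes by doubling them.
                  var escaped = ddl.TrimEnd(';', ' ', '\r', '\n').Replace("'", "''");
                  block.AppendLine("  EXECUTE IMMEDIATE '" + escaped + "';");
              }
              block.AppendLine("END;");

              using (var cmd = connection.CreateCommand())
              {
                  cmd.CommandText = block.ToString();
                  cmd.ExecuteNonQuery(); // one round trip for the whole batch
              }
          }
      }
    Note the quoting here is deliberately naive; a real implementation would need more careful escaping and special handling for statements that themselves contain PL/SQL (such as trigger or package bodies).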

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary. Steve Rogalsky and I did a session that we called “Agilist, Heal Thyself!”. We used a format that was new to me, but that Steve had seen used at another conference. What we did was start by asking the audience to give us a list of challenges they had had when adopting agile. We wrote them all down, then had everybody vote on the most interesting ones. Then we split into two groups, and each group was assigned one of the agile challenges. We had 20 minutes to discuss the challenge and suggest solutions or approaches to improve things. At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learnings, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions and suggestions:
    Unfinished Stories at the end of Sprints
    The first agile challenge we tackled was something that every single Scrum team I have worked with has struggled with: what happens when you get to the end of a Sprint and there are some stories that are only partially completed? The team in question was getting very demoralized, as they felt that every Sprint was a failure because they never had a set of fully completed stories. How do you avoid this, and/or what do you do when it happens? There were 2 pieces of advice that were well received:
    1. Try to bring stories to completion before starting new ones. This is advice I give all my Scrum teams. If you have a 3-week sprint, what happens all too often is you get to the end of week 2 and a lot of stories are almost done, but almost none are completely done. This is a Bad Thing. I encourage the teams I work with to only start a new story as a very last resort. If you finish your task, look at the stories in progress and see if there’s anything you can do to help before moving on to a new story. In the daily standup, put a focus on seeing what stories got completed yesterday; if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it. Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum. My current team has 2-week sprints, and we usually have about a dozen or so stories in a sprint. We instituted a WIP limit of 4 stories. If 4 stories have been started but not finished, then nobody is allowed to start new stories. This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers). The WIP limit forced the developers to start to pick up QA tasks before moving on to the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits.
    2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous-flow type approach like Kanban? Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done. This eliminates the problem almost entirely. At some points in the project (releases) you need to be able to burn down all the half-finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to zero WIP every 2 weeks (e.g.
    when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it).
    Trying to Introduce Agile into a team with previous Bad Agile Experiences
    One of the agile adoption challenges somebody described was that he was in a leadership role on a team he had recently joined – let’s call him Dave. This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project. Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc. The problem he was facing is that everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure. Dave blamed this failure on the consultant brought in previously to lead the agile transition, but regardless of the reason, the team had very negative feelings towards agile and was very resistant to trying it out again. Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well. What was Dave to do?
    Ultimately, the best advice was to question *why* Dave wanted to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and taking a Big Bang approach to introducing change, he would be better served by identifying problems the team currently faces, having a discussion with the team to get everybody to agree that specific problems exist, and then having an open discussion about ways to address those problems. This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to. For example, when we discussed this with Dave, he said that probably the team’s biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs. Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from the team if he framed it as a discussion of existing problems and a brainstorm of possible solutions. And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought into those changes. Taking an incremental approach has a greater chance of success.
    I see something similar in my day job all the time too. Clients who for one reason or another claim to not be fans of agile (or not to be ready for agile yet) then go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc. It’s kind of funny at times; sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”.
    PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge. There are 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2 weeks. The first 2-week period that we miss, we’re eliminated. Last person standing gets the money. So expect at least one blog post every couple of weeks for the near future (I hope!).
    And check out the blogs of the other 5 people in this blogger challenge:
    Steve Rogalsky: http://winnipegagilist.blogspot.ca
    Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek
    Tyler Doerkson: http://blog.tylerdoerksen.com
    David Alpert: http://www.spinthemoose.com
    Dave White: http://www.agileramblings.com (note: site not available yet. Should be shortly or he owes me some money!)

    Read the article

  • Delivering the Integrated Portal Experience!

    - by Michael Snow
    Guest post by Richard Maldonado, Principal Product Manager, Oracle WebCenter Portal
    Organizations are still struggling to standardize on a user interaction platform which can meet the needs of all their target audiences. This has not only resulted in inefficient and inconsistent experiences for their users, but it also creates inefficiencies (productivity and costs) for the departments that manage the applications and information systems. Portals have historically been the unifying platform that provides IT with a common interface which can securely surface the most relevant interactions for a given user and/or group of users. However, organizations have found that the technologies available have either not provided the flexibility necessary to address all of their use cases, or they rely too much on IT resources to manage, maintain, and evolve.
    Empowering the Business Groups
    The core issue that IT departments face with delivering portal experiences is having enough resources to respond to and address the influx of requirements which come in from the business. Commonly, when a business group wants a new portal site established for their group, they submit a request to the IT dept, and the IT dept then assigns the request to an administrator and/or developer to build. Unfortunately, this approach is not scalable; it can be a time-consuming activity which requires significant interaction between the business owner and the IT resource. A modern user interaction platform should empower the business groups by providing them tools which they can use to build and manage the portal experiences without the need for IT's involvement. And because business groups rarely have technical resources (developers) on staff, the tools must be easy enough that virtually any business user could use them. In addition, the tool must be powerful enough to allow them to build the experience that they need: things such as creating a whole new portal, adding/managing pages and page hierarchy, managing user/group access, adding/modifying components within the page, etc. This balance between ease-of-use and flexibility is key to the successful adoption of tools which will ultimately reduce the burden on IT, respond to the needs of the business, and deliver high-value experiences for the users.
    Ready or Not, Here They Come: Smartphones and Tablets
    Recently, several studies have highlighted that smartphone and tablet-style devices have overtaken PCs in both sales and usage. This shift is further driving organizations to re-evaluate how they're delivering data, information, and applications to their users. Users are expecting to get the same level of access and interaction, but in ways which are optimized for the capabilities of the device that they are using.
    Expect More
    With the ever-growing number of new IT projects and flat/shrinking budgets, organizations are looking for comprehensive solutions which can deliver integrated web experiences that are tailored for the users and optimized for mobile devices. Piecing together a number of point solutions is no longer an option. A modern portal technology should not only address the traditional needs of integrating and surfacing back-end applications/information, but it should also enable the business through easy-to-use tools and accelerate the delivery of mobile-optimized experiences.
    WebCenter in Action Series: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter, featuring Qualcomm & Keste.

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience – and how to publish once-written information in both blogs. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice about this topic.
    My two blogs
    I have two working blogs:
    · this one here
    · a technology and programming blog for the local market
    My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for it. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career and they have very good web statistics too.
    My local blog
    My local blog is about programming, web and technology. It has a way wider target audience than this blog here has. For example, in my local blog I also blog about local events, cool new concept phones, different websites providing some interesting services etc. But local guys can also find there my postings about how to solve one or another programming problem, and postings about Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. This blog has made me a lot of cool contacts and I have had a lot of interesting discussions there about different technical topics.
    Why I started this blog?
    Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to target more than one technical topic to find enough readers. At the same time you are still interested in your main topics and you want to reach more people who share the same interests as you. Practically, one day you will grow out of the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with it? Every second of my time I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.
    Defining target audiences
    One thing you should always do when you have more than one blog is define target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way and just as effectively. Here is how I defined the target audiences for my blogs:
    · local blog – the reader of my local blog is an IT professional, software developer, technology innovator or just some guy who is interested in technology,
    · this blog – the reader of this blog is an experienced professional software developer who works on Microsoft technologies, or a software developer who is open minded and open to new technologies and interesting solutions to development problems.
    You can see how the local blog – due to a small market with fewer people – has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side I think these decisions were made well, because it is very hard to build up a popular general IT blog. On a global level it is better to target a specific niche and find readers who are professionals in your favorite topics.
    Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.
    Publishing content to different blogs
    My local blog and this blog have some overlapping topics like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing on my local blog as well? Well, it really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are:
    · is the target audience interested in this topic?
    · is the target audience expecting a more specific and deeper handling of this topic, or are they expecting a more general handling of it?
    · is the problem you are discussing relevant for the target audience or not?
    You have to answer these questions and after that make your decision. If you need to modify your original posting, then take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.
    Cross-posting and referencing
    It is tempting to save the time that preparing a blog post takes, and if you are done with a posting in one blog it may seem like a good idea to make a short posting on the other blog and add a reference to the first one, where the topic is discussed at greater length. Well, don't do it – all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience and some people may be interested in a more specific handling of a topic. In this case feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits both blogs well, I will write my posting for one blog and then answer the previous three questions before posting the same thing to the other blog.
    Conclusion
    Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding these new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high quality content to your audience. It is something you will learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, then is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time you will get over all the obstacles pretty soon. Just don't forget one thing – content is king and your readers expect high quality from you.

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part I shared with you how a broad-based transformation makes cloud more than a technology initiative; in this section I will describe how it requires people (organizational) and process changes as well, and how these changes are as critical as the choice of the right tools and technology.
    People: Most IT organizations have a fairly complex organizational structure. There are different groups managing different pieces of the puzzle, and yet they don't always work together. Provisioning a new application may therefore require a request to float endlessly through the system administrator, DBA and middleware admin worlds – resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation, which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages ago will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.
    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging on to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators can define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.
    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation – thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn't received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical. For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications:
    Broad and Deep Solution
    It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as the infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization.
    While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.
    Run and Manage Your Mission-Critical Applications Efficiently
    It should not only be able to run your mission-critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.
    Automated, Integrated Set of Cloud Management Capabilities
    At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.
    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry - covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence therefore that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. And in large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of the Enterprise Cloud. Stay tuned.

    Read the article

  • Which network protocol to use for lightweight notification of remote apps?

    - by Chris Thornton
    I have this situation.... Client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, https, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices. They could be in a corporate/campus network, DSL/Cable, even Dialup. Client uses Delphi (2005 + SOAP fixes from 2007), and the server is C#, but from an architecture/design standpoint, that shouldn't matter. Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If 0, it sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds. If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" soap service and would push data to it. But since they're external, sit behind their own firewalls, and sometimes private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1000/sec messages that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send soap messages to the client (push) as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:
    1) Is there a way for the client to make a request for GetMessageCount() via Soap 1.1, get the response, and then perhaps "stay on the line" for perhaps 5-10 minutes to get additional responses in case new data arrives? i.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on Sql Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?
    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?
    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which would include info for "oh by the way, in case the answer changes in the next hour, ping me at this address...".
    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris
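    By way of illustration, a minimal server-side sketch of idea 1 (an assumption-laden sketch, not code from the question) would be a long-polling variant of GetMessageCount that blocks for a while before answering; CountPendingMessages is a hypothetical stand-in for the real lookup:
      // Hypothetical long-poll variant of the "messagecount" method: instead of
      // answering 0 immediately, the server re-checks for up to maxWait before
      // replying, so the client can poll far less often.
      using System;
      using System.Threading;

      public class MessageService
      {
          public int GetMessageCountLongPoll(string clientId, TimeSpan maxWait)
          {
              DateTime deadline = DateTime.UtcNow + maxWait;
              int count = CountPendingMessages(clientId);
              while (count == 0 && DateTime.UtcNow < deadline)
              {
                  Thread.Sleep(TimeSpan.FromSeconds(2)); // wait server-side instead of client-side polling
                  count = CountPendingMessages(clientId);
              }
              return count;
          }

          private int CountPendingMessages(string clientId)
          {
              return 0; // placeholder: query the real message store for this client
          }
      }
    As the question already anticipates, each blocked call still ties up a connection (and, in this naive form, a worker thread) for the length of the wait, so this only scales with an asynchronous host or a short wait window.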

    Read the article

  • Which network protocol to use for lightweight notification of remote apps (Delphi 2005)

    - by Chris Thornton
    I have this situation.... Client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, https, etc. They can be anywhere, and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices. They could be in a corporate/campus network, DSL/Cable, even Dialup. Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If 0, it sleeps for another 15 min and checks again. We're trying to get that down to 7 seconds. If this were an internal app, with one or just a few dozen clients, we'd write a client "listener" soap service and would push data to it. But since they're external, sit behind their own firewalls, and sometimes private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10K clients, each checking their messagecount every 10 seconds, is going to be 1000/sec messages that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack. I don't think it's practical to have the server send soap messages to the client (push) as this would require too much configuration at the client end. But I think there are alternatives that I don't know about. Such as:
    1) Is there a way for the client to make a request for GetMessageCount() via Soap 1.1, get the response, and then perhaps "stay on the line" for perhaps 5-10 minutes to get additional responses in case new data arrives? i.e. the server says "0", then a minute later, in response to some SQL trigger (the server is C# on Sql Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"?
    2) Is there some other protocol that we could use to "ping" the client, using information gathered from their last GetMessageCount() request?
    3) I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which would include info for "oh by the way, in case the answer changes in the next hour, ping me at this address...".
    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as it would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that? Or am I pretty much stuck with polling? TIA, Chris

    Read the article

  • A Web exception occurred because an HTTP 503 - ServiceUnavailable response was received from Unknown

    - by Dai
    As far as I can tell my Exchange 2010 Mailbox and Client Access server is working fine except for Outlook Anywhere. I fired up the Exchange Connectivity Tester and ran it against my server and I get this report:
    Part 5 Testing HTTP Authentication Methods for URL https://mail.contoso.com/rpc/rpcproxy.dll?server6.corp.contoso.com:6002.
    The HTTP authentication test failed.
    Additional details: A Web exception occurred because an HTTP 503 - ServiceUnavailable response was received from Unknown.
    When I do a search for "ServiceUnavailable response was received from Unknown." I get only a couple of relevant results, including a 22k-view Exchange Forum thread, but none of the solutions discussed help. There is nothing of relevance in the server's Event Log. mail.contoso.com is the public domain name of the CAS/MB/HT server. server6.corp.contoso.com is the internal domain name of the server.

    Read the article

  • What is the syntax for Dsynchronize "exclude filter" for a file's full path, to exclude bin\* and obj\* of a C# solution?

    - by Nam G. VU
    Dsynchronize is a great free tool to sync two folders. I'm using it to sync two solutions checked out from two different TFS Team Collections. I want to exclude the following:
    All files in the bin folder
    All files in the obj folder
    I tried bin\*; obj\* but it doesn't work. How can I do that?
    ps. Though, trying *.g.* and *cache* does help to exclude the files whose names match the filter. It seems the filter is applied to the file name only, NOT the full path of the file.

    Read the article

  • Excel error "This workbook contains Excel 4.0 macros or Excel 5.0 modules"

    - by James
    I have a workbook that was protected via the Protect Workbook feature. It was sent to someone else to modify. When they sent it back, it was unprotected and when I try to reprotect it I get this error, "This workbook contains Excel 4.0 macros or Excel 5.0 modules. If you would like to password protect or restrict permission to this document, you need to remove these macros." I looked and there are no new macros in the edited file. The original file contained the same macros and it was able to be write protected, so I'm not sure why the modified file is having a problem. What are common causes and solutions for this error and does it make sense for the modified file to have the error when the original doesn't?

    Read the article

  • Sharepoint 2010 - One or more services have started or stopped unexpectedly

    - by Mr Shoubs
    Does anyone know why I keep getting the following message in the SharePoint 2010 "review problems and solutions" list:
    The following services are managed by SharePoint, but their running state does not match what SharePoint expects: SPAdminV4. This can happen if a service crashes or if an administrator starts or stops a service using a non-SharePoint interface. If SharePoint-managed services do not match their expected running state, SharePoint will be unable to correctly distribute work to the service.
    Failing Services: SPTimerService (SPTimerV4)
    I assume this is referring to the SharePoint 2010 Timer Service, but I can only see this in services.msc (and it is running), not in Application Management > Manage services on server. Does anyone know why this keeps occurring and how I can stop it? Note - I looked in the event log and all I can see is that same error.
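    As an aside, if the goal is just to check and restart the underlying Windows services named by the health rule, a minimal C# sketch (illustrative only, not from the question) using ServiceController might look like this:
      // Sketch: inspect the Windows services referenced by the health-rule message
      // and start any that are stopped. Needs a reference to System.ServiceProcess
      // and administrative rights on the SharePoint server.
      using System;
      using System.ServiceProcess;

      class SharePointServiceCheck
      {
          static void Main()
          {
              string[] services = { "SPTimerV4", "SPAdminV4" };
              foreach (string name in services)
              {
                  using (var sc = new ServiceController(name))
                  {
                      Console.WriteLine("{0}: {1}", name, sc.Status);
                      if (sc.Status == ServiceControllerStatus.Stopped)
                      {
                          sc.Start();
                          sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
                      }
                  }
              }
          }
      }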

    Read the article

  • VirtualBox - bridged adapter settings fail in Windows 7 host

    - by pcampbell
    Consider a Windows 7 host where the VirtualBox guest is configured to use a Bridged Adapter. An exception is raised when starting this guest machine:
    Failed to open/create the internal network HostInterfaceNetworking (VERR_SUPDRV_COMPONENT_NOT_FOUND)
    Result Code: E_FAIL (0x80004005)
    What resolution is there to this problem for Bridged Adapters in VirtualBox? The solutions attempted:
    changed the Adapter Type to all available choices. No changes.
    uninstalled VirtualBox, rebooted the host, re-installed VirtualBox. No change in behaviour.
    edited the machine's .xml file, wiping out all <Network> <Adapter> nodes, and had VirtualBox re-create those nodes. No change.
    Host Details
    Oracle VirtualBox 3.2.4 (release 62467)
    Windows 7 x86

    Read the article

  • Unable to make admin interface connection to server when moving a mailbox.

    - by TheCodeMonk
    I have an existing Exchange 2003 server running on Windows Server 2003. I am in the process of replacing our current server infrastructure and virtualizing it all in Hyper-V. I have Essential Business Server 2008 installed in 3 separate VMs and running. Everything seems to be working fine so far. I am now trying to migrate my Exchange mailboxes over to the new Exchange server on the messaging server, and every time I try I get this error:
    MapiExceptionNetworkError: Unable to make admin interface connection to server. (hr=0x80040115, ec=-2147221227)
    I have done some searching and found solutions like adding the computer to the Exchange domain servers group and install group, and also making sure the user logged into the new server is in the proper groups. I also saw a solution about making sure that any unused NICs are disabled. I've tried all that to no avail.

    Read the article

  • Networking with Windows 7

    - by Alix Axel
    I have several computers connected to my home wireless network and I want to make use of some of the features of Windows 7 for home networks, but I can't seem to get them working:
    How do I keep files and folders in sync between specific computers? I'm not talking about Live Sync.
    How can I stream to Windows Media Player from another computer?
    I would appreciate it if someone could provide me some links / solutions that address my needs. Thanks! To anyone who thinks this question is a duplicate and wants to close it, please bear in mind the following: I'm not looking for additional software. I know I can use Live Sync, Dropbox and so on, but I'm asking this: how do I configure Windows 7 to sync files within my home network - no Internet required! This has something to do with shared folders and offline files in Windows 7, but I can't get it to work. PS: Please merge with this question: http://superuser.com/questions/139763/networking-with-windows-7

    Read the article

  • How to add a local file to trusted zone in IE8?

    - by Raghu Dodda
    I want to add a file on my local drive (C:\something.html) to the Trusted Zone in IE8 (my OS is Windows Server 2003). The Add Sites dialog box does not seem to take entries for files on the local drive. I have tried:
    file://C:\something.html
    file:\\localhost\c$\something.html
    I have seen other solutions (on superuser and elsewhere), such as Mark of the Web, that allow a local file to be treated as if it were part of the Internet zone, but I want to add my file to the Trusted Zone. How can I do this? Thanks.

    Read the article

  • Microsoft Office documents collaboration - Open Source alternative

    - by Saggi Malachi
    I am looking for a good solution for collaborating on Microsoft Office documents. We currently just edit directly on a Samba share, but it's one big mess: sometimes people leave the office with their laptops while documents are still open, so stale swap files remain behind and then nobody is sure what's going on. Is there any good and simple open-source solution based on Linux? I've tried Alfresco, but it is much more than what I need; we have an internal wiki for most collaboration, and I just need something for the work we do in Microsoft Office (mostly Excel files, the rest is in the wiki). EDIT: Some more info as requested - we are a very small group, 4 full-time employees and a few freelancers. The best idea I've got so far is managing the files in a Subversion repository with a lock-modify-unlock policy, but I'd love to hear about better solutions. Thanks!
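    If the Subversion idea is acceptable, the usual pattern for binary Office files is the svn:needs-lock property plus explicit locks, so working copies stay read-only until someone takes the lock - a minimal sketch, with a placeholder file name:

      # make working copies of the file read-only until a lock is taken
      svn propset svn:needs-lock '*' Budget.xlsx
      svn commit -m "Require a lock before editing Budget.xlsx"

      # take the lock, edit in Excel, then commit (committing releases the lock by default)
      svn lock -m "Editing Q3 figures" Budget.xlsx
      svn commit -m "Update Q3 figures"
      svn unlock Budget.xlsx    # only needed if nothing ended up being committed

    On the Windows side, TortoiseSVN exposes the same workflow as "Get lock..." / "Release lock..." in Explorer, which keeps it manageable for non-technical users.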

    Read the article

  • Remove hardware from "safely remove hardware" list on Windows 7

    - by ricsmania
    Hi, I have a USB wifi card (D-Link DWA-125) in a Windows 7 x64 computer. The problem is that the card appears in the "safely remove hardware" list, but I never want to unplug it, so I would like to remove it from the list. So far I have only found solutions that mess with the registry (and don't really work properly), that replace the Safely Remove Hardware tool with third-party software, or, worse, that hide the icon altogether. Yes, I do need the icon for USB drives and phones. So, is there a "clean" way to remove this device from the list? Thanks.

    Read the article

  • How to resolve 0xc000007b error opening Adobe After Effects?

    - by user225170
    I have installed Adobe CS6 on Windows 8 and get the error "0xc000007b" every time I open Adobe After Effects. All other Adobe software, including Photoshop and Premiere Pro, works perfectly. I have looked online extensively but have not found any solutions. OS: Windows 8 (64-bit) - CPU: Intel Core i3-3110M 2.40GHz - RAM: 6GB DDR3 - GPU: AMD Radeon HD 7670M. When I open Adobe After Effects CS6 with Dependency Walker, this message appears: "Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. Error: Modules with different CPU types were found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." What can I try to resolve this error?
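    The "Modules with different CPU types were found" line usually means a 32-bit DLL is being loaded into a 64-bit process (or vice versa), which is the classic cause of 0xc000007b. A hedged way to check the bitness of a suspect DLL from PowerShell - the path below is only an example, not a known culprit:

      $path = 'C:\Windows\System32\msvcp100.dll'          # example path only
      $bytes = [System.IO.File]::ReadAllBytes($path)
      $peOffset = [BitConverter]::ToInt32($bytes, 0x3C)   # e_lfanew points at the "PE\0\0" header
      $machine = [BitConverter]::ToUInt16($bytes, $peOffset + 4)
      switch ($machine) { 0x014C { 'x86 (32-bit)' } 0x8664 { 'x64 (64-bit)' } default { 'unknown' } }

    If a Visual C++ runtime DLL turns out to be the wrong architecture, re-installing the matching redistributables (both x86 and x64) is the usual next step.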

    Read the article

  • Troubleshooting PXE-E11: ARP Timeout error under VMware ESXi 4 and HPSA 7.8

    - by warren
    I am running HP Server Automation v7.8 in a lab environment on VMware ESXi, managed via vSphere 4. On the same host I have several small VMs for OS-provisioning testing (512MB RAM, 10GB hard disk, one NIC on the same vSwitch that HPSA is running on). DHCP is configured to hand out addresses in the 192.168.10.151-200 range. On boot, the VM receives an IP (e.g. 192.168.10.198) within seconds. However, after it receives its IP, a PXE-E11: ARP Timeout error occurs while it tries to continue the PXE boot from the DHCP server. I do not know whether this is an HPSA-specific error, as I have seen reports of PXE-E11 on various forums. Proposed solutions I have seen so far (changing VLAN settings, for example) have not been applicable to my environment. Are there any pointers or troubleshooting steps that could help resolve this?
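    One low-level check worth doing - a sketch, assuming you can reach ESXi Tech Support Mode or the vSphere CLI (where the equivalent command is vicfg-vswitch) - is to confirm the HPSA VM and the provisioning VMs really sit on the same vSwitch and port group, and that the port group's VLAN ID matches what the physical uplink expects:

      esxcfg-vswitch -l

    The listing shows each vSwitch with its port groups, VLAN IDs, and uplinks, so a mismatch that would break ARP between the PXE client and the boot server tends to stand out.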

    Read the article

  • Register a dll in Windows Server 2003

    - by Karee
    I'm trying to register some DLLs for a particular application but am having some problems. The DLLs are stored in the application folder on the D: drive. I run cmd.exe, go to the D: drive and run regsvr32, but it gives me the error: "The module was loaded, but the DllRegisterServer entry point was not found. This file cannot be registered." After searching for other solutions, I also tried copying the DLLs into the C:\WINDOWS\system32 folder and running regsvr32 directly there, but that didn't work either; it gave me the same error. I'm not very familiar with Windows administration, so I may need detailed instructions/explanations. Any help would be greatly appreciated! Thank you very much!
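    That specific error means the DLL does not export a DllRegisterServer function, so regsvr32 has nothing to call; it is typical of DLLs that are not self-registering COM servers (plain dependency DLLs, or .NET assemblies that need regasm instead). Two hedged checks, with placeholder paths, assuming the dumpbin tool from Visual Studio or the Platform SDK is available for the first one:

      rem Does the DLL export DllRegisterServer at all?
      dumpbin /exports D:\MyApp\SomeLibrary.dll | findstr /i DllRegisterServer

      rem If it is a .NET assembly that needs COM registration, use regasm rather than regsvr32
      rem (path assumes .NET 2.0 is installed):
      %WINDIR%\Microsoft.NET\Framework\v2.0.50727\regasm.exe D:\MyApp\SomeLibrary.dll /codebase

    If the export is absent and the DLL is not a .NET assembly, it most likely does not need registering at all - the application just needs to be able to find it.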

    Read the article

  • Email deliverability -- Whitelist solution or Email delivery service?

    - by JoefrshnJoeclean
    Hey Folks -- our company keeps running into the same recurring problem: email deliverability. A lot of our emails are still getting trapped in Yahoo and Gmail spam filters. We followed Yahoo's best-practices guide as well as tips I've found on Server Fault (setting up DKIM and SPF), and even took the Email Server Test (http://www.allaboutspam.com/email-server-test/). Now my question is: has anyone had success using whitelist solutions like Goodmail or EmailReach? Alternatively, I'm beginning to think that going with an email delivery service like MailChimp will save me the headache and future stress of managing our email lists. So: whitelist solution, or just fork over the money and send via an email delivery service? Thanks!
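    For reference, the DNS side of SPF and DKIM is small; a minimal sketch, where the domain, IP address, and selector are placeholders and the DKIM public key comes from whatever signs your outbound mail:

      example.com.                  IN TXT "v=spf1 mx ip4:203.0.113.25 ~all"
      mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key-data>"

    Getting those two right, and keeping reverse DNS consistent with the server's HELO name, tends to matter at least as much for Yahoo/Gmail placement as any paid whitelist.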

    Read the article

  • .htaccess Permission denied. Unable to check htaccess file

    - by Josh
    Hi, I have a strange problem when adding a sub-domain to our virtual server. I have done similar sub-domains before and they have worked fine. When I try to access the sub-domain I get a 403 Forbidden error. I checked the error logs and found the following error: pcfg_openfile: unable to check htaccess file, ensure it is readable. I've searched Google and could only find solutions regarding file and folder permissions, which I have checked, and the problem still isn't solved. I also saw problems involving FrontPage Extensions, but those are not installed on the server. Edit: Forgot to say that there isn't a .htaccess file in the sub-domain's directory.
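    The error can show up even when no .htaccess exists: with AllowOverride enabled, Apache still stats the path looking for one, and that fails if the worker user cannot traverse (execute) every parent directory. A quick hedged check, with a placeholder document root:

      # show the owner and permissions of every component of the path
      namei -m /var/www/vhosts/sub.example.com/httpdocs

      # if a directory in the chain lacks execute permission for the Apache user/group, e.g.:
      chmod o+x /var/www/vhosts/sub.example.com

    Comparing the output against a sub-domain that works is usually the fastest way to spot the odd directory out.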

    Read the article

  • Shared Excel WorkBook is locked by another user

    - by Simone
    I've been trying everything; this is my last resort. I moved folders and files from an old Windows Server 2003 file server to a new file server (Windows Server 2008 R2) with DFS and ABE enabled. Now a specific shared Excel file is driving me crazy: all of a sudden, many times a day, users get the following error while opening that file: Filename.xlsx is locked for editing by 'another user'. Open 'Read-Only' or click 'Notify' to open. I've already followed this, with no joy: http://blogs.technet.com/b/the_microsoft_excel_support_team_blog/archive/2012/05/14/the-definitive-locked-file-post.aspx In any case, I strongly suspect this is not client-related, since the problem never occurred in the past with Windows Server 2003. I've found and followed many other solutions - nothing. The users are all on Office 2010 on Windows 7 machines, apart from a few who are still on Windows XP machines. I appreciate any help, thank you!
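    Two server-side checks are worth running when it happens - a sketch, with a placeholder file ID and path: look for a stale SMB handle on the workbook and close it, and look for an orphaned Excel owner file (~$Filename.xlsx) left behind by a client that disconnected mid-session:

      rem on the new file server: list open shared files, then close a stale handle by its ID
      net file
      net file 42 /close

      rem check for an orphaned, hidden owner file next to the workbook
      dir /a:h "D:\Shares\Finance\~$Filename.xlsx"

    Deleting a leftover ~$ owner file (when nobody genuinely has the workbook open) is a common fix for this symptom after a file-server migration.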

    Read the article
