Search Results

Search found 7271 results on 291 pages for 'master notes'.

Page 47 of 291

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don't have time to prepare a treatise on what's involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you. There are three main components of SharePoint visual customization:
    1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc.), including the background images of various sections. All told, there could be around 50 images involved and a few hundred CSS (style) classes. Installing a theme once it's been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week… once decisions have been made about the desired appearance.
    2) Master Page – A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft's web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They're not all used at once, but if they're not there when they're needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I've been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together.
    3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there's really no way to quantify this without a set of requirements).
    That's basically it. To recap: fonts and coloring are done with themes, which can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation. Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project. Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.
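    For readers who have never opened one, a minimal sketch of the kind of markup a SharePoint master page revolves around is shown below. The two placeholder IDs are standard SharePoint ones, but the surrounding layout is illustrative only; a real master page carries a few dozen placeholders plus SharePoint-specific controls, which is exactly why the testing effort described above is so significant.

        <%-- Illustrative fragment only, not a complete, deployable master page. --%>
        <%@ Master Language="C#" %>
        <html>
        <body>
            <form runat="server">
                <%-- SharePoint expects a set of well-known placeholders; two common ones: --%>
                <asp:ContentPlaceHolder id="PlaceHolderPageTitleInTitleArea" runat="server" />
                <div class="main-content">
                    <%-- Page content (web part zones, page layouts) is injected here --%>
                    <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" />
                </div>
            </form>
        </body>
        </html>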

    Read the article

  • Can I use swank-clojure with the clojure 1.2 master branch?

    - by Rob
    I'm happily using swank-clojure, installed via elpa. But I'd like to do some work with deftype, defprotocol, etc., which aren't available in clojure 1.1. To use my own class paths, I'm using the excellent suggestion by Rick Moynihan in the stackoverflow question about setting custom classpaths, which was to set up a script like:
    #!/bin/bash
    java -server -cp "./lib/*":./src clojure.main -e "(do (require 'swank.swank) (swank.swank/start-repl))"
    And that works swimmingly if the clojure jar file in lib is 1.1, but with 1.2, it blows up:
    Exception in thread "main" java.lang.NoSuchMethodError: clojure.lang.RestFn.<init>(I)V (macroexpand.clj:1)
    at clojure.lang.Compiler.eval(Compiler.java:5274)
    at clojure.lang.Compiler.load(Compiler.java:5663)
    at clojure.lang.RT.loadResourceScript(RT.java:330)
    at clojure.lang.RT.loadResourceScript(RT.java:321)
    at clojure.lang.RT.load(RT.java:399)
    at clojure.lang.RT.load(RT.java:371)
    at clojure.core$load__5663$fn__5671.invoke(core.clj:4255)
    at clojure.core$load__5663.doInvoke(core.clj:4254)
    at clojure.lang.RestFn.invoke(RestFn.java:409)
    ...and many, many more
    So is there some magical incantation to make this work, or is clojure 1.2 compatibility not there yet?

    Read the article

  • Master-details and CollectionViewSource in separate views - cannot make it work

    - by devnet247
    Hi all, I really cannot seem to make/understand how it works with separate views It all works fine if a bundle all together in a single window. I have a list of Countries-Cities-etc... When you select a country it should load it's cities. Works So I bind 3 listboxes successfully using collection sources and no codebehind more or less (just code to set the datacontext and selectionChanged). you can download the project here http://cid-9db5ae91a2948485.skydrive.live.com/self.aspx/PublicFolder/2MasterDetails.zip <Window.Resources> <CollectionViewSource Source="{Binding}" x:Key="cvsCountryList"/> <CollectionViewSource Source="{Binding Source={StaticResource cvsCountryList},Path=Cities}" x:Key="cvsCityList"/> <CollectionViewSource Source="{Binding Source={StaticResource cvsCityList},Path=Hotels}" x:Key="cvsHotelList"/> </Window.Resources> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition/> <ColumnDefinition/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition/> </Grid.RowDefinitions> <TextBlock Grid.Column="0" Grid.Row="0" Text="Countries"/> <TextBlock Grid.Column="1" Grid.Row="0" Text="Cities"/> <TextBlock Grid.Column="2" Grid.Row="0" Text="Hotels"/> <ListBox Grid.Column="0" Grid.Row="1" Name="lstCountries" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Source={StaticResource cvsCountryList}}" DisplayMemberPath="Name" SelectionChanged="OnSelectionChanged"/> <ListBox Grid.Column="1" Grid.Row="1" Name="lstCities" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Source={StaticResource cvsCityList}}" DisplayMemberPath="Name" SelectionChanged="OnSelectionChanged"/> <ListBox Grid.Column="2" Grid.Row="1" Name="lstHotels" ItemsSource="{Binding Source={StaticResource cvsHotelList}}" DisplayMemberPath="Name" SelectionChanged="OnSelectionChanged"/> </Grid> Does not work I am trying to implement the same using a view for each eg(LeftSideMasterControl -RightSideDetailsControls) However I cannot seem to make them bind. Can you help? I would be very grateful so that I can understand how you communicate between userControls You can download the project here. http://cid-9db5ae91a2948485.skydrive.live.com/self.aspx/PublicFolder/2MasterDetails.zip I have as follows LeftSideMasterControl.xaml <Grid> <ListBox Name="lstCountries" SelectionChanged="OnSelectionChanged" DisplayMemberPath="Name" ItemsSource="{Binding Countries}"/> </Grid> RightViewDetailsControl.xaml MainView.xaml <CollectionViewSource Source="{Binding}" x:Key="cvsCountryList"/> <CollectionViewSource Source="{Binding Source={StaticResource cvsCountryList},Path=Cities}" x:Key="cvsCityList"/> </UserControl.Resources> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition/> <ColumnDefinition Width="3*"/> </Grid.ColumnDefinitions> <Views:LeftViewMasterControl x:Name="leftSide" Margin="5" Content="{Binding Source=}"/> <GridSplitter Grid.Column="0" Grid.Row="0" Background="LightGray"/> <Views:RightViewDetailsControl Grid.Column="1" x:Name="RightSide" Margin="5"/> </Grid> ViewModels public class CountryListVM : ViewModelBase { public CountryListVM() { Countries = new ObservableCollection<CountryVM>(); } public ObservableCollection<CountryVM> Countries { get; set; } private RelayCommand _loadCountriesCommand; public ICommand LoadCountriesCommand { get { return _loadCountriesCommand ?? 
(_loadCountriesCommand = new RelayCommand(x => LoadCountries(), x => CanLoadCountries)); } } private static bool CanLoadCountries { get { return true; } } private void LoadCountries() { var countryList = Repository.GetCountries(); foreach (var country in countryList) { Countries.Add(new CountryVM { Name = country.Name }); } } } public class CountryVM : ViewModelBase { private string _name; public string Name { get { return _name; } set { _name = value; OnPropertyChanged("Name"); } } } public class CityListVM : ViewModelBase { private CountryVM _selectedCountry; public CityListVM(CountryVM country) { SelectedCountry = country; Cities = new ObservableCollection<CityVM>(); } public ObservableCollection<CityVM> Cities { get; set; } public CountryVM SelectedCountry { get { return _selectedCountry; } set { _selectedCountry = value; OnPropertyChanged("SelectedCountry"); } } private RelayCommand _loadCitiesCommand; public ICommand LoadCitiesCommand { get { return _loadCitiesCommand ?? (_loadCitiesCommand = new RelayCommand(x => LoadCities(), x => CanLoadCities)); } } private static bool CanLoadCities { get { return true; } } private void LoadCities() { var cities = Repository.GetCities(SelectedCountry.Name); foreach (var city in cities) { Cities.Add(new CityVM() { Name = city.Name }); } } } public class CityVM : ViewModelBase { private string _name; public string Name { get { return _name; } set { _name = value; OnPropertyChanged("Name"); } } } Models ========= public class Country { public Country() { Cities = new ObservableCollection<City>(); } public string Name { get; set; } public ObservableCollection<City> Cities { get; set; } } public class City { public City() { Hotels = new ObservableCollection<Hotel>(); } public string Name { get; set; } public ObservableCollection<Hotel> Hotels { get; set; } } public class Hotel { public string Name { get; set; } }

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
    At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.
    (Figure: Enterprise Business Processes)
    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.
    - The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    - The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    - The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.
    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.
    Global Single Schema
    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.
    - Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    - The schema is multi-organization enabled, so data manipulation can be controlled along organizational boundaries.
    - It uses variable-byte Unicode to support over 31 languages.
    - The schema encodes flexible date and flexible address formats for easy localization.
    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.
    (Figure: Interconnected Business Entities)
    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Join us at the Gartner MDM Summit in LA on April 4-5

    - by Dain C. Hansen
    This year, we're proud to announce that Oracle is a Platinum sponsor of the Gartner Master Data Management Summit this April 4 – 5, 2012 in Los Angeles. The event is a follow-on to the Gartner BI Summit, so if you're already attending that event, stick around on Wednesday and Thursday and don't miss it. In particular, don't miss our key session at 9:30 AM on Thursday April 4th, "Brace for Impact: Key Trends in Master Data Management" with Ford Goodman and Dain Hansen. Master Data Management helps organizations perform better by creating a single, coherent version of customers, products and suppliers. But how do you get started? And if you've laid a foundation, how do you become world-class? Designed to address all MDM maturity levels, the Gartner Master Data Management Summit delivers the tools, technologies and best practices to help you take control of your master data and dramatically improve business performance. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions:
    - Oracle Customer Case Study and Solution Provider Session
    - Oracle Solution Showcase Receptions
    - Oracle Face-to-Face Meetings
    Hot topics to be covered:
    - Forecasting key trends shaping MDM
    - Building a business-driven MDM program
    - Assessing MDM maturity
    - Creating the MDM organization
    - Evolving to multidomain MDM
    To learn more about this event or to register, click here.

    Read the article

  • How should I share variables between instances/classes?

    - by tesselode
    I'm making a game using LOVE, so everything is programmed in Lua. I've been experimenting with using classes and object orientation recently. I've found out that a nice system to use is having most of the game's code in different classes, and having a table of instances with all of the instances of any class in it. This way, I can go through every instance of every class and update and draw it by calling the same function. There is a problem, though. Let's say I have an instance of a player with variables for health and recharge time of a weapon. I also have a master instance which is responsible for drawing the HUD. How can I tell the master instance what the player's health is? Bad solutions:
    - Assuming that the player instance will always have the same position in the table - that can be easily changed.
    - Using global variables. Global variables are evil.
    - Having the master instance outside of the instances table, and having the player set variables inside the master instance, which it then uses for HUD drawing. This is really bad because now I have to make a duplicate of every variable the master instance needs.
    What is the proper, standard way of sharing variables between instances? Do I need to change the way I keep track of instances?
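    For context, a minimal LÖVE-style sketch of the instance-table pattern described in the question might look like the following; the names (instances, hud) are hypothetical, and handing the HUD a reference to the player is shown only as one common alternative to globals or table positions.

        -- Sketch only: a shared table of instances, each with update/draw methods.
        local instances = {}

        local player = {health = 100, recharge = 0}
        function player:update(dt) self.recharge = math.max(0, self.recharge - dt) end
        function player:draw() end

        -- The HUD keeps a reference to the player instead of relying on table position.
        local hud = {player = player}
        function hud:update(dt) end
        function hud:draw()
            love.graphics.print("HP: " .. self.player.health, 10, 10)
        end

        table.insert(instances, player)
        table.insert(instances, hud)

        function love.update(dt)
            for _, obj in ipairs(instances) do obj:update(dt) end
        end

        function love.draw()
            for _, obj in ipairs(instances) do obj:draw() end
        end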

    Read the article

  • What is this juju status ERROR state?

    - by JUAN CABALLERO
    After I do a juju bootstrap I wait until cloud-init is finished, but I get no juju and the following errors:
    ERROR state/api: websocket.Dial wss://b4exj.master:17070/: dial tcp 198.105.244.240:17070: connection timed out
    ERROR state/api: websocket.Dial wss://b4exj.master:17070/: dial tcp 198.105.244.240:17070: connection timed out
    Let me add that b4exj.master does not reside at 198.105.244.240:17070 but at 10.x.x.x. This is on Ubuntu 12.04.4 with MAAS 1.4 and juju 1.18, all 64-bit, no VMs.

    Read the article

  • How to make a LINQ master-detail query for a 0..n relationship?

    - by JK
    Given a classic DB structure of Orders has zero or more OrderLines and OrderLine has exactly one Product, how do I write a LINQ query to express this? The output would be:
    OrderNumber - OrderLine - Product Name
    Order-1     - null      - null          (this order has no lines)
    Order-2     - 1         - Red widget
    I tried this query, but it is not returning the orders with no lines:
    var model = (from po in Orders
                 from line in po.OrderLines
                 select new
                 {
                     OrderNumber = po.Id,
                     OrderLine = line.LineNumber,
                     ProductName = line.Product.ProductDescription,
                 });
    I think that the 2nd from is limiting the query to only those that have OrderLines, but I don't know another way to express it. LINQ is very non-obvious if you ask me!
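    As a hedged sketch rather than a confirmed answer, the usual way to keep the orders that have no lines is to give the second from a left-outer-join shape with DefaultIfEmpty(), roughly like this (LineNumber is assumed here to be an int):

        var model = from po in Orders
                    from line in po.OrderLines.DefaultIfEmpty()   // null when the order has no lines
                    select new
                    {
                        OrderNumber = po.Id,
                        OrderLine   = line == null ? (int?)null : line.LineNumber,
                        ProductName = line == null ? null : line.Product.ProductDescription,
                    };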

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop – Sqoop and Zookeeper.
    What is Sqoop?
    Most businesses store their data in RDBMS as well as other data warehouse solutions. They need a way to move data to the Hadoop system to do various processing and return it back to RDBMS from the Hadoop system. The data movement can happen in real time or at various intervals in bulk. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format which Hadoop can use, and later loads it into HDFS. Essentially it is an ETL tool: it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it also extracts data from Hadoop and loads it back into relational (RDBMS) data stores. Essentially, Sqoop is a command line tool which does SQL to Hadoop and Hadoop to SQL. It is a command line interpreter. It creates MapReduce jobs behind the scenes to import data from an external database to HDFS. It is a very effective and easy-to-learn tool for non-programmers.
    What is Zookeeper?
    ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words – in a Hadoop cluster there are many different nodes and one node is the master. Let us assume that the master node fails due to any reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for:
    - Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster.
    - When any process in the Hadoop cluster needs a certain configuration to complete its task, Zookeeper makes sure that the node gets the necessary configuration consistently.
    - In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected.
    There are many other tasks Zookeeper performs when it comes to Hadoop cluster coordination and communication. Basically, without the help of Zookeeper it is not possible to design any new fault-tolerant distributed application.
    Tomorrow
    In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – Big Data Analytics.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
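    To make the SQL-to-Hadoop and Hadoop-to-SQL directions concrete, typical Sqoop invocations look roughly like the following; the connection string, credentials, table names and HDFS paths are placeholders, not values from this article.

        # Import a relational table into HDFS (4 parallel map tasks).
        sqoop import \
          --connect jdbc:mysql://dbhost/sales \
          --username etl --password '****' \
          --table orders \
          --target-dir /user/hadoop/orders \
          -m 4

        # Export processed results from HDFS back into an RDBMS table.
        sqoop export \
          --connect jdbc:mysql://dbhost/sales \
          --username etl --password '****' \
          --table orders_summary \
          --export-dir /user/hadoop/orders_summary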

    Read the article

  • monitor multiple work repositories in ODI11g EM

    - by tina.wang
    When you create a domain, by default it will let you specify the master/work repository information. This work repository is automatically configured and can be directly monitored in EM. But your master repository may contain multiple work repositories, so how do you let EM monitor all of them?
    1) These work repositories must have been registered in your master repository.
    2) In the WebLogic console, generate a generic data source for every work repository, e.g. jdbc/mySecondWork.
    3) In ODI Console, create a new repository connection for every work repository; the master JNDI information is jdbc/odiMasterRepository by default.
    OK, now you can see that the work repository status is configured. By the way, there is a bug when the work repository is of execution type.

    Read the article

  • Can I make ssh tell me which control file it would use for multiplexing?

    - by Ryan Thompson
    I am using the following options in my ~/.ssh/config in order to enable connection multiplexing: ControlMaster auto ControlPath ~/.ssh/control/master-%r@%h:%p However, this has the annoying problem that the first shell to connect to a particular server must be the last to disconnect, because it is the master connection that all the other connections are using. So if you log out of the master, it appears to just hang. To solve this, I would like to wrap ssh with a script that checks if the control master file exists, and if not, starts a master ssh process in the background. Then it would start a slave ssh session. In order to accomplish this, my script would have to determine the path to the control file that ssh would use. This would entail parsing the ssh command line options and config files and implementing the logic for determining the ControlPath. Is there any way to just ask ssh what path it would use, so I can check it?
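    A rough sketch of the wrapper idea described above, assuming the ControlPath stays exactly as configured (~/.ssh/control/master-%r@%h:%p) and using only standard ssh options (-O check, -M, -N, -f), might look like this. Note that it simply re-implements the %r@%h:%p expansion by hand, which is precisely the fragility the question is trying to avoid.

        #!/bin/bash
        # Sketch only: wrap ssh so a background master is started on demand.
        host="$1"
        user="${2:-$USER}"
        port="${3:-22}"
        ctl="$HOME/.ssh/control/master-${user}@${host}:${port}"

        # If no usable master exists, start one in the background (-M master, -N no command, -f background).
        if ! ssh -o ControlPath="$ctl" -O check "$host" 2>/dev/null; then
            ssh -M -f -N -o ControlPath="$ctl" -p "$port" -l "$user" "$host"
        fi

        # Open an ordinary session that rides on the shared connection.
        exec ssh -o ControlPath="$ctl" -p "$port" -l "$user" "$host"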

    Read the article

  • Canonical links for huge websites

    - by Florin
    Let's say I have 5 products that are identical except for the product code, the product color specification and the product image. The title, meta and description are identical (by the way, the color is in a select form). I made the other 4 products canonical to the one that is the master, based on many factors. If the master becomes inactive or goes out of stock, one product from the other 4 will become the new master and the rest will become canonical to it. The question is: by becoming the master instead of a canonical, will the site suffer a penalty from Google, or will it work just fine? What will Google think about this strategy?
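    For reference, the canonical relationship being discussed is just a link element in the head of each duplicate product page pointing at the current master; the URL below is illustrative only.

        <!-- Placed on each of the four duplicate product pages -->
        <link rel="canonical" href="https://www.example.com/products/widget-red" />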

    Read the article

  • Multiple Actions (Forms) on one Page - How not to lose master data, after editing detail data?

    - by nWorx
    Hello all, I've got a form where users can edit the members of a group, so they have the possibility to add members or remove existing members. The URL goes like ".../Group/Edit/4", where 4 is the id of the group. The view looks like this:
    <% using (Html.BeginForm("AddUser", "Group")) %>
    <%{%>
        <div class="inputItem">
            <label for="newUser">User</label>
            <%=Html.TextBox("username")%>
            <input type="submit" value="Add"/>
        </div>
    <%}%>
    <% using (Html.BeginForm("RemoveUser", "Group")) %>
    <%{%>
        <div class="inputItem">
            <label for="groupMember">Bestehende Mitglieder</label>
            <%= Html.ListBox("groupMember", from g in Model.GetMembers() select new SelectListItem(){Text = g}) %>
            <input type="submit" value="Remove" />
        </div>
    <%}%>
    The problem is that after adding or removing one user I lose the id of the group. What is the best solution for this kind of problem? Should I use hidden fields to save the group id? Thanks in advance.
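    A minimal sketch of the hidden-field idea mentioned at the end of the question, assuming the view model exposes the group id as Model.Id, is simply to emit the id inside each form:

        <%-- Inside each form: carry the group id along with the post (Model.Id is assumed here). --%>
        <%= Html.Hidden("id", Model.Id) %>

    The AddUser and RemoveUser actions can then accept an additional id parameter and redirect back to Edit with the same id, so the group context is never lost between posts.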

    Read the article

  • How to override old design with new design (using master pages)?

    - by arun
    Hi, I am migrating a site from ASP.NET 1.1 to ASP.NET 3.5 and using a new design layout based on master pages. In the previous site some pages have user controls, and these are mixed with the old design. If I convert a page to use the master page, I get the old design combined with the new design. Can anyone help? I am new to development. Thanks in advance.

    Read the article

  • How to get Master and Slave Table data in one row using SQL Server?

    - by Space Cracker
    I have a main table called 'Employee' and another slave table called 'EmployeeTypes' that has a FK from 'Employee'. Each row in 'Employee' can have zero or many rows in 'EmployeeTypes', and I want to write an SQL query that returns the data of all employees, where each employee row contains its related data from 'EmployeeTypes' (for example the column 'TypeID') as a comma-separated list, like this:
    Meco Beco --- 45 ---- 1,2,3
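    A hedged sketch of one common SQL Server approach (FOR XML PATH plus STUFF), assuming Employee has ID and Name columns and EmployeeTypes carries the foreign key as EmployeeID:

        -- One row per employee, with TypeID values collapsed into a comma-separated list.
        SELECT  e.Name,
                e.ID,
                STUFF((SELECT ',' + CAST(t.TypeID AS varchar(20))
                       FROM   EmployeeTypes t
                       WHERE  t.EmployeeID = e.ID
                       FOR XML PATH('')), 1, 1, '') AS TypeIDs
        FROM    Employee e;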

    Read the article

  • Looking for a better way to combine deep architecture refactoring with feature-based development

    - by voroninp
    Problem statement:
    Given:
    - TFS as source control
    - A heavy desktop client application with tons of legacy code and bad or almost absent architecture design
    - Clients constantly requiring new features with sound quality and fast delivery, and constantly complaining about an unfriendly UI
    Problem: The application undoubtedly requires deep refactoring. This process inevitably makes the application unstable, and a dedicated stabilization phase is needed.
    We've tried: Refactoring in master with periodic merges from master (MB) to the feature branch (FB). (My mistake.) Result: many unstable branches.
    What we are advised: Create an additional branch for refactoring (RB), periodically synchronizing it with MB via merges from MB to RB. After RB is stabilized, we substitute master with RB and create a new branch for further refactoring. This is the plan. But here I expect the real hell of merging MB to RB after merging any FB to MB. The main advantage: a stable master most of the time.
    Are there any better alternatives to the process?

    Read the article

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over:
    Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs
    Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs
    Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs
    I'm not actually using storeconfigs:
    [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf
    [agent]
    server = puppet.mydomain.com
    environment = production
    report = true
    [main]
    logdir = /var/log/puppet
    vardir = /var/lib/puppet
    ssldir = /var/lib/puppet/ssl
    rundir = /var/run/puppet
    factpath = $vardir/lib/facter
    pluginsync = true
    certname = puppet1.mydomain.com
    [master]
    modulepath = $confdir/environments/$environment/modules
    manifest = $confdir/environments/$environment/manifests/site.pp
    templatedir = $confdir/templates
    autosign = $confdir/autosign.conf
    ssl_client_header = SSL_CLIENT_S_DN
    ssl_client_verify_header = SSL_CLIENT_VERIFY
    report = true
    reports = hipchat
    Is there any way I can suppress these messages? Where do they actually come from?

    Read the article

  • c++ / c confusion

    - by mrbuxley
    Im trying to make a small app in c++ that saves midifiles with this library. http://musicnote.sourceforge.net/docs/html/index.html The sample code that is given on the homepage looks like this. #include "MusicNoteLib.h" void main() { MusicNoteLib::Player player; // Create the Player Object player.Play("C D E F G A B"); // Play the Music Notes on the default MIDI output port } This piece of code won't compile in Visual studio 2008, I get many errors like MusicNoteLib.h(22) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int I don't understand the error or where to start looking... There also was some dll files that can be used instead of this h file. #ifndef __MUSICNOTE_LIB_H__EBEE094C_FF6E_43a1_A6CE_D619564F9C6A__ #define __MUSICNOTE_LIB_H__EBEE094C_FF6E_43a1_A6CE_D619564F9C6A__ /** @file MusicNoteLib.h * \brief Main header file for accessing the MusicNote Library */ /// <Summary> /// This header file can be included directly in your project or through /// MusicNoteLib.h of the MusicNoteDll project. If included directly, this /// will be built directly as a satic library. If included through MusicNoteDll /// this will use dllImports through MUSICNOTELIB_API /// </Summary> #ifndef MUSICNOTELIB_API #define MUSICNOTELIB_API #endif // MUSICNOTELIB_API //#include "Player.h" namespace MusicNoteLib /// Music Programming Library { typedef void (__stdcall *LPFNTRACEPROC)(void* pUserData, const TCHAR* szTraceMsg); typedef void (__stdcall *LPFNERRORPROC)(void* pUserData, long lErrCode, const TCHAR* szErrorMsg, const TCHAR* szToken); extern "C" { MUSICNOTELIB_API typedef void MStringPlayer; MUSICNOTELIB_API void* GetCarnaticMusicNoteReader(); /// <Summary> /// Creates a MusicString Player object. /// </Summary> MUSICNOTELIB_API MStringPlayer* CreateMusicStringPlayer(); /// <Summary> /// Plays Music string notes on the default MIDI Output device with the default Timer Resolution. /// Use PlayMusicStringWithOpts() to use custom values. /// @param szMusicNotes the Music string to be played on the MIDI output device /// @return True if the notes were played successfully, False otherwise /// </Summary> MUSICNOTELIB_API bool PlayMusicString(const TCHAR* szMusicNotes); /// <Summary> /// Same as PlayMusicString() except that this method accepts Callbacks. /// The Trace and Error callbacks will be used during the Parse of the Music Notes. /// @param szMusicNotes the Music string to be played on the MIDI output device /// @param traceCallbackProc the Callback to used to report Trace messages /// @param errorCallbackProc the Callback to used to report Error messages /// @param pUserData any user supplied data that should be sent to the Callback /// @return True if the notes were played successfully, False otherwise /// </Summary> MUSICNOTELIB_API bool PlayMusicStringCB(const TCHAR* szMusicNotes, LPFNTRACEPROC traceCallbackProc, LPFNERRORPROC errorCallbackProc, void* pUserData); /// <Summary> /// Plays Music string notes on the given MIDI Output device using the given Timer Resolution. /// Use PlayMusicString() to use default values. 
/// @param szMusicNotes the Music notes to be played /// @param nMidiOutPortID the device ID of the MIDI output port to be used for the play /// @param nTimerResMS preferred MIDI timer resolution, in MilliSeconds /// @return True if Play was successful, False otherwise /// </Summary> MUSICNOTELIB_API bool PlayMusicStringWithOpts(const TCHAR* szMusicNotes, int nMidiOutPortID, unsigned int nTimerResMS); /// <Summary> /// Same as PlayMusicStringWithOpts() except that this method accepts Callbacks. /// The Trace and Error callbacks will be used during the Parse of the Music Notes. /// @param szMusicNotes the Music notes to be played /// @param nMidiOutPortID the device ID of the MIDI output port to be used for the play /// @param nTimerResMS preferred MIDI timer resolution, in MilliSeconds /// @param traceCallbackProc the Callback to used to report Trace messages /// @param errorCallbackProc the Callback to used to report Error messages /// @param pUserData any user supplied data that should be sent to the Callback /// @return True if Play was successful, False otherwise /// </Summary> MUSICNOTELIB_API bool PlayMusicStringWithOptsCB(const TCHAR* szMusicNotes, int nMidiOutPortID, unsigned int nTimerResMS, LPFNTRACEPROC traceCallbackProc, LPFNERRORPROC errorCallbackProc, void* pUserData); /// <Summary> /// Save the given MusicString content into a MIDI output file /// @param szMusicNotes Music Notes to be converted to MIDI output /// @param szOutputFilePath path of the MIDI output file /// @return True if the the content was saved successfully, False otherwise /// </Summary> MUSICNOTELIB_API bool SaveAsMidiFile(const TCHAR* szMusicNotes, const char* szOutputFilePath); //MUSICNOTELIB_API typedef void (*ParseErrorProc)(const MusicNoteLib::CParser*, MusicNoteLib::CParser::ErrorEventHandlerArgs* pEvArgs); //MUSICNOTELIB_API typedef void (*ParseTraceProc)(const MusicNoteLib::CParser*, MusicNoteLib::CParser::TraceEventHandlerArgs* pEvArgs); MUSICNOTELIB_API void Parse(const TCHAR* szNotes, LPFNTRACEPROC traceCallbackProc, void* pUserData); } // extern "C" } // namespace MusicNoteLib #endif // __MUSICNOTE_LIB_H__EBEE094C_FF6E_43a1_A6CE_D619564F9C6A__
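    A hedged guess, since the header uses TCHAR throughout: error C4430 at that line often just means TCHAR is an unknown type, so including <tchar.h> (or <windows.h>) before the library header, and using a standard int main, may be all that is needed. Purely a sketch:

        #include <tchar.h>          // defines TCHAR, which MusicNoteLib.h relies on
        #include "MusicNoteLib.h"

        int main()                  // standard C++ requires int main(), not void main()
        {
            MusicNoteLib::Player player;      // create the Player object
            player.Play("C D E F G A B");     // play the notes on the default MIDI output port
            return 0;
        }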

    Read the article

  • What are some of the core principles needed to master Multi threading using Delphi?

    - by Gary Becks
    I am kind of new to programming in general (about 8 months with on and off in delphi and a little python here and there) and I am in the process of buying some books. I am interested in learning about concurrent programming and building multi threaded apps using Delphi. Whenever I do a search for "multithreading delphi" or "delphi multithreading tutorial" I seem to get conflicting results as some of the stuff is about using certain libraries (omnithread library) and other stuff seems to be more geared towards programmers with more experience. I have studied quite a few books on delphi and for the most part they seem to kind of skim the surface and not really go into depth on the subject. I have a friend who is a programmer (he uses c++) who recommends I learn what is actually going on with the underlying system when using threads as opposed to jumping into how to actually implement them in my programs first. On amazon.com there are quite a few books on concurrent programming but none of them seem to be made with Delphi in mind. Basically I need to know what are the main things I should be focused on learning before jumping into using threads, if I can/should attempt to learn them using books that are not specifically aimed at delphi developers (don't want to confuse myself reading books with a bunch of code examples in other languages right now) and if there are any reliable resources/books on the subject that anyone here could recommend. Thanks in advance.

    Read the article

  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have git 1.7.11.1 on Windows and a local test repository with 2 branches. One is master, with index.php and help.php. I then create another branch called slave :) From git bash I run rm help.php and it disappears from the folder, but I don't stage anything. I switch to check out the master branch, and it is supposed to restore the file help.php because it is not modified in the master branch, isn't it? And it does not do it. When I go back to the slave branch, commit, and then switch to check out master, help.php appears. Is that the way it is supposed to work? Why?

    Read the article

  • How much is a graduate Information Technology (IT) degree worth?

    - by T. Webster
    I have a bachelor's degree in IS/MIS (not CS) and a few years of software development experience now. I want to pursue some sort of graduate education that will give me more career options. I was wondering if it's worth pursuing a graduate degree in Information Technology (not CS).
    - How much is a master's degree in Information Technology worth? By this I mean how marketable does it make you, and how valuable is the education itself?
    - How does its worth compare to a master's in CS?
    - What types of employers are looking for the IT master's degree vs. a Computer Science master's degree?
    - What's the most valuable thing an Information Technology degree can give that someone wouldn't have without the degree?
    Thanks.

    Read the article

  • Importing a spreadsheet, but having trouble.

    - by Chris Phelps
    I have a form that allows a user to import a spreadsheet. This spreadsheet is generally static when it comes to column headers, but now the users want to be able to include an optional column (called Notes). My code crashes when I try to read the column from the spreadsheet if it doesn't exist.
    Dim objCommand As New OleDbCommand()
    objCommand = ExcelConnection() 'function that opens the spreadsheet and returns objCommand
    Dim reader As OleDbDataReader
    reader = objCommand.ExecuteReader()
    While reader.Read()
        Dim Employee As String = Convert.ToString(reader("User"))
        Dim SerialNUM As String = Convert.ToString(reader("serialno"))
        Dim Notes As String = Convert.ToString(reader("notes"))   ' <-- this is the line that crashes
    If the spreadsheet contains a Notes column, all goes well. If not, it crashes. How can I check to see if the Notes column exists in the spreadsheet to avoid the crash?
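    One hedged sketch, using only standard OleDbDataReader members (FieldCount and GetName), is to look for the column by name once before reading it inside the loop:

        ' Sketch only: read "notes" only if the spreadsheet actually contains that column.
        Dim hasNotes As Boolean = False
        For i As Integer = 0 To reader.FieldCount - 1
            If String.Equals(reader.GetName(i), "notes", StringComparison.OrdinalIgnoreCase) Then
                hasNotes = True
                Exit For
            End If
        Next

        Dim Notes As String = If(hasNotes, Convert.ToString(reader("notes")), String.Empty)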

    Read the article

  • Why is there no Microsoft Certified Master program targeted at developers?

    - by Jason Coyne
    In the lower-level certifications, developer technology is all over the place. At the highest level (Microsoft Certified Architect), the Solutions track appears to be a good fit for high-level application designers and architects. MCA requires an MCM as a prerequisite. However, none of the MCM tracks are targeted towards development. Obviously, to be a good architect you need to have knowledge of other technologies: servers, SQL, messaging, etc. But those seem like things that should be part of the course load, and not the sole focus. Are the lower tiers really as high as you can go for application-focused professionals? For most developers, the SQL MCM seems to be the best fit. Are the MCM and MCA really targeted at administrators rather than developers?

    Read the article
