Search Results

Search found 108 results on 5 pages for 'marko jarvenpaa'.

Page 2/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • Caching of toolbar names in ArcMap

    - by Marko Apfel
    For small GUI changes in your own ArcMap customizations it is normally enough to start the application and check that every entry is visible. Especially when refactoring your own toolbar it seemed sufficient to verify that the toolbar still appears in the choice list. But this is a fallacy: the entries there come from a cache. You can verify this by dumping information in the toolbar constructor. The constructor is only called when the toolbar is activated! Otherwise you see the toolbar name, but this name comes out of a cache. The cache is stored in the registry under: HKCU\Software\ESRI\ArcMap\Settings\CommandBarNameCache
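
    If stale names keep showing up, you can clear this cache so that ArcMap rebuilds it on the next start. A minimal C# sketch, assuming sufficient rights to HKCU (the key path is the one named above; run at your own risk):

        using Microsoft.Win32;

        class ClearArcMapToolbarNameCache
        {
            static void Main()
            {
                // Delete the cached toolbar names; ArcMap rebuilds the cache on the next start.
                using (RegistryKey settings =
                    Registry.CurrentUser.OpenSubKey(@"Software\ESRI\ArcMap\Settings", writable: true))
                {
                    if (settings != null)
                    {
                        // The subkey only exists once ArcMap has cached at least one name.
                        settings.DeleteSubKeyTree("CommandBarNameCache", throwOnMissingSubKey: false);
                    }
                }
            }
        }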

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading about and trying out several ideas from others.

    Must-haves

    Aliases

        [alias]
            ci = commit
            st = status
            co = checkout
            oneline = log --pretty=oneline
            br = branch
            la = log --pretty=\"format:%ad %h (%an): %s\" --date=short
            df = diff
            dc = diff --cached
            lg = log -p
            lol = log --graph --decorate --pretty=oneline --abbrev-commit
            lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
            ls = ls-files
            ign = ls-files -o -i --exclude-standard

    Colors

        [color]
            ui = auto
        [color "branch"]
            current = yellow reverse
            local = yellow
            remote = green
        [color "diff"]
            meta = yellow bold
            frag = magenta bold
            old = red bold
            new = green bold
            whitespace = red reverse
        [color "status"]
            added = green
            changed = red
            untracked = cyan

    Core

        [core]
            autocrlf = true
            excludesfile = c:/Users/<user>/.gitignore
            editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and Diff

        [merge]
            tool = kdiff3
        [mergetool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [mergetool "p4merge"]
            path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
            cmd = p4merge \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
            keepTemporaries = false
            trustExitCode = false
            keepBackup = false
        [diff]
            guitool = kdiff3
        [difftool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [difftool "p4merge"]
            path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
            cmd = \"p4merge.exe $LOCAL $REMOTE\"

    Read the article

  • Does Dart have any useful features for web programmers?

    - by marko
    http://www.dartlang.org/ I've checked out the site very briefly and got curious. Are there any advantages to using Dart? Is it just a replacement for JavaScript? It looks like a simpler Java. Writing quite a lot of C# at work, the language feels very much like what I'm used to, so the syntax looks like a breeze to learn. Does anybody have any opinions on or experiences with the language? (Compared to CoffeeScript, whose Ruby-like syntax I don't do, Dart's syntax looks more familiar to me.)

    Read the article

  • Clear list of recent repositories in Git Extensions

    - by Marko Apfel
    Orphaned and incorrectly specified repositories in the recent list are annoying. I did not find an option to clean up these entries straight away, nor the place where they are persisted. So it was time for Process Explorer. The list is stored under HKEY_CURRENT_USER\Software\GitExtensions\GitExtensions\1.0.0.0 in the string value “history”. You can edit the content of the string value or delete it – then, when Git Extensions is restarted, the string value is recreated with a default skeleton.
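
    A hedged sketch of resetting the list from code (the key path and value name are the ones found above; close Git Extensions first):

        using Microsoft.Win32;

        class ClearGitExtensionsHistory
        {
            static void Main()
            {
                // Remove the "history" value; Git Extensions recreates it with
                // a default skeleton on the next start.
                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(
                    @"Software\GitExtensions\GitExtensions\1.0.0.0", writable: true))
                {
                    if (key != null)
                    {
                        key.DeleteValue("history", throwOnMissingValue: false);
                    }
                }
            }
        }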

    Read the article

  • Does schema.org improve SEO?

    - by marko
    http://schema.org This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to mark up their pages in ways recognized by major search providers. Search engines including Bing, Google, Yahoo! and Yandex rely on this markup to improve the display of search results, making it easier for people to find the right web pages. It sounds wonderful, but does the search spider ignore the extra attributes and elements? Is it just too clever and ignores them? Could it even lower your visibility because of such alteration?

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You can imagine that the database could crash while copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY; that one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.

    Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

    handler::prepare_inplace_alter_table is used by InnoDB to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

    handler::inplace_alter_table is used in InnoDB for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table).

    handler::commit_inplace_alter_table is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split: part of it is inside the InnoDB data dictionary tables, and part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.

    The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild.

    Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table with an ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

    Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require the data dictionary to keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN.

    The bulk copying of the table will bypass record locking and undo logging. To facilitate online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write their changes to the log.

    When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table, because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because the columns will be freed at rollback.
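
    To make the new syntax concrete, here is a hedged sketch of requesting an online index creation from client code. It assumes the MySQL Connector/Net client library and reuses the example table t(a INT) from above; the index name and connection string are made up. MySQL raises an error instead of silently taking a stronger lock if LOCK=NONE cannot be honored:

        using MySql.Data.MySqlClient;

        class OnlineDdlSketch
        {
            static void Main()
            {
                using (var connection = new MySqlConnection(
                    "Server=localhost;Database=test;Uid=user;Pwd=password;"))
                {
                    connection.Open();
                    // ALGORITHM=INPLACE avoids the table copy; LOCK=NONE requests
                    // that concurrent reads and writes stay allowed during the build.
                    using (var command = new MySqlCommand(
                        "ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE",
                        connection))
                    {
                        command.ExecuteNonQuery();
                    }
                }
            }
        }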

    Read the article

  • Be careful when Git suppresses bin Folders

    - by Marko Apfel
    Initial situation: For Visual Studio projects, the typical content of a .gitignore file contains this line: bin or [Bb]in. It is used to prevent Git from tracking compile output as repository-relevant data. Problem: But keep in mind that this will also suppress the bin folders of additional stuff like frameworks and toolsets. For instance, Microsoft.SDKs contains a folder named Bin with a lot of programs, and Simian contains a folder named bin with the program itself. If you also store such artifacts in the repository – according to the principle of a “self-containing project” – you could lose the content of the bin folder! Solution: So far I do not have a good idea. So I verify for each newly added toolset or framework whether it has such a bin folder. If it does, I must add this bin folder manually to the repository (for example with git add -f) so that Git tracks it.

    Read the article

  • Entity Framework: Connecting to an mdf user database file via localDB during script execution

    - by Marko Apfel
    Problem: If you run the “Generate database from model” wizard and execute the generated script, the destination database could be the wrong one (for instance the master database of the SQL Server). Solution: To use your own attachable mdf user database, some connection information must be specified during script execution. Executing the script opens the dialog “Connect to Server”. Press “Options” and go to the second tab, “Connection Properties”. Select “Browse server” in the “Connect to database” dropdown box. Confirm the information dialog with “Yes”. In the following dialog you can choose your user database. Now the schema is created in the user database.

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML etc. Like SQL, LINQ provides a declarative notation for problem solving – i.e. you do not need to describe in detail how a task is to be solved, you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, one would access features this way. Then this construct is conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures =
            features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. This is easier than you might think at first sight – you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background, whose execution is delayed (deferred execution) – only when you really request entities (foreach, Count(), ToList(), ..) does the processing take place, although the query was already created at a completely different place. Especially with multiple iterations through the entities, in your first debugging sessions you rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise logic for constructing IEnumerable<IFeature> can be achieved by running through an IFeatureCursor. You return each feature via yield. For easier usage I have put the logic in an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass,
            IQueryFilter queryFilter, RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor =
                featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features =
            _featureClass.GetFeatures(null, RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a deferred execution in the same context it is not a good idea to re-iterate over the features. In this case only the content of the last (recycled) feature is provided, and all the features are the same in the second pass. Therefore, this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, and so the cursor was already moved through the features. So the ForEach() method always delivers the same feature. In such situations, you must not use a recycling cursor. Repeated execution of ForEach() is not a problem, because each time the state machine is re-instantiated and thus the cursor runs again – that is the magic already mentioned above.

    Perspective

    Now you can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. This only requires programming the method and property to access the enumerator. In the enumerator itself, the Reset() method organizes the re-execution of the search. This can be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(
            _featureClass,
            featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of code increases immensely. On top of that, in automated unit tests an IEnumerable can be mocked very easily – a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created. The processing costs additional time – about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, flawless and easy to maintain than manually iterating, comparing and building a list of results. In my experience the code size is reduced by an average of 75 to 90 percent! So I like to wait a few milliseconds longer. As so often, it has to be balanced between maintainability and performance – and for me maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is not dominated by code execution anyway, but by waiting for user input.

    Demo source code

    The source code for this prototype with several unit tests can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.
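
    A minimal sketch of such an enumerator, assuming the member names from the fragments above (the exact class layout is my guess, not the original source):

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;
        using ESRI.ArcGIS.Geodatabase;

        public class FeatureEnumerator<T> : IEnumerator<IFeature>
        {
            private readonly T _t;
            private readonly Func<T, IFeatureCursor> _resetCursor;
            private IFeatureCursor _featureCursor;
            private IFeature _current;

            public FeatureEnumerator(T t, Func<T, IFeatureCursor> resetCursor)
            {
                _t = t;
                _resetCursor = resetCursor;
                Reset();
            }

            public IFeature Current
            {
                get { return _current; }
            }

            object IEnumerator.Current
            {
                get { return _current; }
            }

            public bool MoveNext()
            {
                // Advances the underlying ArcObjects cursor; null signals the end.
                _current = _featureCursor.NextFeature();
                return _current != null;
            }

            public void Reset()
            {
                // Re-executes the search so a second foreach starts from scratch.
                _featureCursor = _resetCursor(_t);
            }

            public void Dispose()
            {
                // Release the COM cursor (skipped for plain .NET mocks in unit tests).
                if (_featureCursor != null && Marshal.IsComObject(_featureCursor))
                {
                    Marshal.ReleaseComObject(_featureCursor);
                }
            }
        }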

    Read the article

  • Toolset agnostic build server and Silverlight projects

    - by Marko Apfel
    Problem: Normally I try to keep my continuous integration as toolset-free as possible, to ensure that no local stuff can have an impact on my build. My Silverlight app references a special compile target in a folder outside my developer tree:

        <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />

    So I copied the stuff from this folder to a local one and changed the call to this target in my csproj:

        <Import Project="..\..\..\tools\WebApplications\Microsoft.WebApplication.targets" />

    And now the Visual Studio Conversion Wizard welcomes me with its conversion prompt. Solution: Regardless of what I write – this conversion comes back again and again if the line has any form other than

        <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />

    So it seems there is no simple way to change this behaviour. Workaround: I must accept that this line has to stay in the csproj, and for the build to run, the toolset must be copied to the build server at the correct location. So go to your development machine where Visual Studio is installed and copy the folder "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications" to your build server at the equivalent location. Xmas wishes to Microsoft: Please provide technologies that let us developers bundle all the needed stuff for a project in one developer tree. It should be possible that one checkout starts us up! No additional installations, regardless of whether it is a developing machine or a dedicated build or continuous integration server. Silverlight is only one example; code analysis configurations can also be terrible, and much more …

    Read the article

  • What is your opinion of Ext js?

    - by marko
    I'm thinking of pushing my JavaScript skills further and learning something new. Is Ext JS a good framework to work with, or is it a pain in the ass? I would consider Ext JS for making awesome-looking business applications, and the framework is huge; but with a big library I have some fears that it is difficult, buggy and time-consuming. My fear is that I would end up using some bloatware.

    Read the article

  • Outlook hangs during startup at step “loading profile”

    - by Marko Apfel
    Problem: Starting Outlook shows only the splash screen with the comment “loading profile”. I can cancel the startup, but restarting shows the same. I verified with Task Manager that no hidden Outlook process was bothering me. Solution: Scanpst. Normally the tool “Microsoft Outlook Inbox Repair Tool” (scanpst.exe) is installed along with Outlook. Some people can access it via the Start menu – not me. My lovely Launchy found it under "C:\Program Files (x86)\Microsoft Office\Office14\SCANPST.EXE". Scanpst first asks you for the pst file you would like to scan. I started with the first default offer: C:\Users\…\AppData\Local\Microsoft\Outlook\….ost. And this brings up the information that another application is using this file. Handle: To investigate the culprit, Handle from Sysinternals is your friend in such cases. Start it from an administrative console and pipe the output to a file:

        handle > c:\temp\handle.txt

    Now you can open this file in the editor of your choice and search for the blocked file (your pst file). At the top of the section you see the application that holds a handle to this file (SfdcMsO1.exe). Task Manager: Kill this application and start Outlook again. And voila – everything starts up fine … for me

    Read the article

  • Libparted error when creating partition table

    - by Marko
    I bought a Dell Inspiron 5521 laptop a few days ago that came with Ubuntu preinstalled. I haven't used Ubuntu yet, and I don't have any experience with it. I wanted to install Windows 7 64-bit on my laptop alongside Ubuntu, and made two bootable USB drives, with GParted and Windows 7. There wasn't a suitable partition on my laptop in which I could install Windows 7. I've read the instructions for using GParted to create or manage my hard drive. I inserted the USB, booted from BIOS, and followed the procedure for starting GParted. Then I entered GParted, and the following error occurred: "Libparted error when creating partition table." It asked me to click either OK or Cancel. Either way, my hard disk was shown to me in the user window, in the partitions made by the manufacturer:

        Partition   File sys     Label        Size        Flags
        /dev/sda1   fat32        dellutility  300.00 MiB  diag
        /dev/sda2   fat32        os           3.00 GiB    lba
        /dev/sda3   ext4                      912.46 GiB  boot
        /dev/sda4   extended                  15.75 GiB   (had a subpartition)
        /dev/sda5   linux-swap                15.75 GiB

    ...and an option to switch to /dev/sdb, which is unused and has a capacity of 3 GiB. I selected the biggest partition (912.46 GiB), tried to reduce its size, and clicked OK. Then when I tried to make a new partition, it said it cannot make any more partitions – no more than a maximum of 5. I would like to keep Ubuntu and slowly learn, but I also need to use programs that work in Windows. Thank you for taking the time to answer my question.

    Read the article

  • Running CopySourceAsHtml Add-in under Visual Studio 2010

    - by Marko Apfel
    Until now CopySourceAsHtml only supports Visual Studio 2008 out of the box. But it is no problem to pimp up the config file to support Visual Studio 2010:

    1. Copy all three files to "%userprofile%\Documents\Visual Studio 2010\Addins".
    2. Open CopySourceAsHtml.AddIn in a text editor and change both lines with <Version>9.0</Version> to <Version>10.0</Version>.
    3. Run Visual Studio 2010, and CopySourceAsHtml works fine.

    Read the article

  • @Microsoft: please provide universal and professional concepts

    - by Marko Apfel
    Why are such constructs included in the csproj files?

        <CodeAnalysisRuleSetDirectories>;c:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\\Rule Sets</CodeAnalysisRuleSetDirectories>
        <CodeAnalysisRuleDirectories>;c:\Program Files (x86)\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\\Rules</CodeAnalysisRuleDirectories>

    So every project needs some manual steps to clean the project file before the solution can be built on a continuous integration server. That is annoying! And in a team with mixed Visual Studio editions it is also too specific to the Ultimate edition. As good as Visual Studio is in most cases, sometimes it is really far away from professional coding fundamentals and best practices.

    Read the article

  • Lenovo ThinkPad W530: problem activating the optical/DVD drive

    - by Marko Apfel
    Problem: Sometimes my notebook shows the optical drive as powered off, and the hint there does not change this state. Solution: Looking in Device Manager you see the next problem, so open the properties via a right mouse click. This gives you the hint to remove the drive first: “Windows cannot use this hardware device because it has been prepared for "safe removal", but it has not been removed from the computer. (Code 47)” You can read the full message by dragging the mouse over the hidden part or by pressing the “Properties” button. So we unplug and reinsert the UltraBay. If you think the system is now working – you are wrong. Now the system is of the opinion that the UltraBay is unplugged. You can verify this by refreshing the view in the device panel: our device is no longer there. Now your great gig comes – unplug the UltraBay and reinsert it a second time! After this you can hear, with a medium inside, that the motor really starts, and we have a working device. What a difficult birth …

    Read the article

  • Find out the URL of an RSS feed in Outlook

    - by Marko Apfel
    Challenge: In the past I added some RSS feeds to Outlook. For one of these feeds I would now like to know the original URL. But this is not very intuitive to figure out. Problem: Via the (intuitive) right mouse click you can open a properties dialog for a feed. But there is no hint of the URL! Solution: You can find the information in the RSS Feed Options dialog: open Account Settings, double-click the feed, and voila – here is the URL.

    Read the article

  • Entity Framework, Code First: where is the database?

    - by Marko Apfel
    With Entity Framework 5 in Visual Studio 2012, the Code First feature can leave you with the question “Where is the automatically created database located?” I ran into this question after changing the model, which made the next run throw this error: “The model backing the 'MyContext' context has changed since the database was created. Consider using Code First Migrations to update the database (http://go.microsoft.com/fwlink/?LinkId=238269).” Okay – clear, I thought: “delete the database”. But where is the database, and what type is it? In this constellation the framework generates a LocalDB database. You can access this database via the SQL Server Object Explorer. The first time, you have to add this LocalDB instance; the server name is “(localdb)\v11.0”. And so we can browse through the content of this database. It has the same name as the context class.
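
    For illustration only (the database name below is hypothetical – Code First derives it from the fully qualified name of your context class), the same instance can also be reached from plain ADO.NET with the server name shown above:

        using System;
        using System.Data.SqlClient;

        class LocalDbProbe
        {
            static void Main()
            {
                // "(localdb)\v11.0" is the automatic SQL Server 2012 LocalDB instance.
                const string connectionString =
                    @"Server=(localdb)\v11.0;Database=MyNamespace.MyContext;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine(connection.ServerVersion);
                }
            }
        }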

    Read the article

  • Fix folder scrolling problem in navigation pane of Explorer

    - by Marko Apfel
    Since my first steps with Windows 7 I have hated the behavior of Explorer of scrolling the currently expanded folder down in the navigation pane. Today I found a solution in this thread: Bug: Windows Explorer expands folders inappropriately, jumping the folder you expand to the bottom of the navigation pane.

    1. Download and install Classic Shell.
    2. Activate the Classic Explorer Bar and choose its options.
    3. Verify that “Fix folder scrolling” is checked.
    4. Verify the fixed behavior.
    5. If desired, uninstall Classic Shell – the fix is persisted.

    Read the article

  • What should a system architect know? [closed]

    - by marko
    What does the title "System Architect" really mean? What do you need to know to be a system architect? Do you need to know design patterns or something like that? At work we have a couple of so-called "System Architects". They do the same stuff as we developers do, but the distinction is that they are older and thus have more experience and knowledge of the business domain we work in. But I'm not really seeing the architecting side of their work...

    Read the article

  • Are there any good JavaScript libraries for programming with the HTML5 canvas element? [closed]

    - by marko
    I think there are some JS libraries for programming the HTML5 canvas element. Which one to choose? I've done some JS coding with canvas and am somewhat familiar with the API. But I think some kind of library which encapsulates the somewhat tedious canvas API would be a good thing, as it would speed up development. There are EaselJS – http://www.createjs.com/#!/EaselJS/download – and Paper.js – http://paperjs.org/about/. Does anybody have any experience with any JS libraries for canvas? Any recommendations?

    Read the article

  • Interesting issue with WCF wsHttpBinding through a Firewall

    - by Marko
    I have a web application deployed at an internet hosting provider. This web application consumes a WCF service deployed on an IIS server located at my company's application server, in order to have data access to the company's database. The network guys allowed me to expose this WCF service through a firewall for security reasons. A diagram would look like this:

        [Hosted page] --- (Internet) --- |Firewall <Public IP>:<Port-X>| --- [IIS with WCF Service <Comp. Network Ip>:<Port-Y>]

    I also wanted to use wsHttpBinding to take advantage of its security features and encrypt sensitive information. After trying it out I get the following error:

        Exception Details: System.ServiceModel.EndpointNotFoundException: The message with To 'http://<IP>:<Port>/service/WCFService.svc' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree.

    Doing some research I found out that wsHttpBinding uses the WS-Addressing standards, and reading about this standard I learned that the SOAP header is enhanced to include tags like ‘MessageID’, ‘ReplyTo’, ‘Action’ and ‘To’. So I'm guessing that, because the client application endpoint specifies the firewall's IP address and port, and the service replies with its internal network address, which is different from the firewall's IP, WS-Addressing fires the above message. Which I think is a very good security measure, but it is not quite useful in my scenario. Quoting the WS-Addressing standard submission (http://www.w3.org/Submission/ws-addressing/):

        "Due to the range of network technologies currently in wide-spread use (e.g., NAT, DHCP, firewalls), many deployments cannot assign a meaningful global URI to a given endpoint. To allow these ‘anonymous’ endpoints to initiate message exchange patterns and receive replies, WS-Addressing defines the following well-known URI for use by endpoints that cannot have a stable, resolvable URI. http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous"

    HOW can I configure my wsHttpBinding endpoint to address my firewall's IP and to ignore or bypass the address specified in the ‘To’ WS-Addressing tag in the SOAP message header? Or do I have to change something in my service endpoint configuration? Help and guidance will be much appreciated. Marko. P.S.: While I find a solution to this, I'm using basicHttpBinding with absolutely no problem, of course.
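
    Not part of the original question, but one commonly suggested direction worth verifying for exactly this AddressFilter mismatch: relax the service's address filter so the dispatcher stops matching the WS-Addressing ‘To’ header against the endpoint address. A hedged C# sketch (the contract below is made up for illustration):

        using System.ServiceModel;

        [ServiceContract]
        public interface IWcfService
        {
            [OperationContract]
            string Ping();
        }

        // AddressFilterMode.Any lets the dispatcher accept messages whose
        // WS-Addressing 'To' header does not match the endpoint address,
        // for example when clients address the firewall's public IP
        // instead of the service's internal one.
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class WcfService : IWcfService
        {
            public string Ping()
            {
                return "pong";
            }
        }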

    Read the article

  • Problem connecting to ISP server using xl2tpd as client on Ubuntu Server 13.04

    - by Deon Pretorius
    I have followed guides found on Google and the Ubuntu support pages and can get an xl2tpd connection up, but only under the following conditions:

    1. The ADSL modem is configured and connected to the ISP, or
    2. the ADSL modem is in bridge mode and I have an existing PPPoE connection established.

    If neither of the above is active, xl2tpd won't trigger pppd and connect to the ISP, and thus the tunnel connection fails to connect to the L2TP server of the ISP. Am I doing something wrong?

    /etc/ppp/options.l2tpd.axxess:

        ipcp-accept-local
        ipcp-accept-remote
        refuse-eap
        refuse-chap
        require-pap
        noccp
        noauth
        idle 1800
        mtu 1200
        mru 1200
        defaultroute
        usepeerdns
        debug
        lock
        connect-delay 5000
        name (name used for ppp connection)

    /etc/ppp/pap-secrets:

        # *   password
        (name used for ppp connection as above) * (ppp password supplied by isp)

    /etc/xl2tpd/xl2tpd.conf:

        [global]
        ; Global parameters:
        auth file = /etc/xl2tpd/l2tp-secrets   ; * Where our challenge secrets are
        access control = yes                   ; * Refuse connections without IP match
        debug tunnel = yes

        [lac axxess]
        lns = 196.30.121.50                    ; * Who is our LNS?
        redial = yes                           ; * Redial if disconnected?
        redial timeout = 5                     ; * Wait n seconds between redials
        max redials = 5                        ; * Give up after n consecutive failures
        hidden bit = yes                       ; * Use hidden AVP's?
        length bit = yes                       ; * Use length bit in payload?
        require pap = yes                      ; * Require PAP auth. by peer
        require chap = no                      ; * Require CHAP auth. by peer
        refuse chap = yes                      ; * Refuse CHAP authentication
        require authentication = yes           ; * Require peer to authenticate
        name = BLA85003@axxess                 ; * Report this as our hostname
        ppp debug = yes                        ; * Turn on PPP debugging
        pppoptfile = /etc/ppp/options.l2tpd.axxess  ; * ppp options file for this lac

    /etc/xl2tpd/l2tp-secrets:

        # Secrets for authenticating l2tp tunnels
        # us     them    secret
        # *      marko   blah2
        # zeus   marko   blah
        # *      *       interop
        * vzb_l2tp (*** secret supplied by isp)

    (vzb_l2tp is the ISP server host name.) Any help will be greatly appreciated.

    Read the article

  • C# counter requires 2 button clicks to update

    - by marko.ivanovski.nz
    Hi, I have a problem that has been bugging me all day. In my code I have the following:

        private int rowCount
        {
            get { return (int)ViewState["rowCount"]; }
            set { ViewState["rowCount"] = value; }
        }

    and a button event:

        protected void addRow_Click(object sender, EventArgs e)
        {
            rowCount = rowCount + 1;
        }

    Then on Page_Load I read that value and create controls accordingly. I understand the button event fires AFTER the Page_Load fires, so the value isn't updated until the next postback. Real nightmare. Here's the entire code:

        protected void Page_Load(object sender, EventArgs e)
        {
            string xmlValue = ""; // To read a value from a database
            if (xmlValue.Length > 0)
            {
                if (!Page.IsPostBack)
                {
                    DataSet ds = XMLToDataSet(xmlValue);
                    Table dimensionsTable = DataSetToTable(ds);
                    tablePanel.Controls.Add(dimensionsTable);
                    DataTable dt = ds.Tables["Dimensions"];
                    rowCount = dt.Rows.Count;
                    colCount = dt.Columns.Count;
                }
                else
                {
                    tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
                }
            }
            else
            {
                if (!Page.IsPostBack)
                {
                    rowCount = 2;
                    colCount = 4;
                }
                tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
            }
        }

        protected void submit_Click(object sender, EventArgs e)
        {
            resultsLabel.Text = Server.HtmlEncode(
                DataSetToStringXML(TableToDataSet((Table)tablePanel.Controls[0])));
        }

        protected void addColumn_Click(object sender, EventArgs e)
        {
            colCount = colCount + 1;
        }

        protected void addRow_Click(object sender, EventArgs e)
        {
            rowCount = rowCount + 1;
        }

        public DataSet TableToDataSet(Table table)
        {
            DataSet ds = new DataSet();
            DataTable dt = new DataTable("Dimensions");
            ds.Tables.Add(dt);

            // Add headers
            for (int i = 0; i < table.Rows[0].Cells.Count; i++)
            {
                DataColumn col = new DataColumn();
                TextBox headerTxtBox = (TextBox)table.Rows[0].Cells[i].Controls[0];
                col.ColumnName = headerTxtBox.Text;
                col.Caption = headerTxtBox.Text;
                dt.Columns.Add(col);
            }

            for (int i = 0; i < table.Rows.Count; i++)
            {
                DataRow valueRow = dt.NewRow();
                for (int x = 0; x < table.Rows[i].Cells.Count; x++)
                {
                    TextBox valueTextBox = (TextBox)table.Rows[i].Cells[x].Controls[0];
                    valueRow[x] = valueTextBox.Text;
                }
                dt.Rows.Add(valueRow);
            }
            return ds;
        }

        public Table DataSetToTable(DataSet ds)
        {
            DataTable dt = ds.Tables["Dimensions"];
            Table newTable = new Table();

            // Add headers
            TableRow headerRow = new TableRow();
            for (int i = 0; i < dt.Columns.Count; i++)
            {
                TableCell headerCell = new TableCell();
                TextBox headerTxtBox = new TextBox();
                headerTxtBox.ID = "HeadersTxtBox" + i.ToString();
                headerTxtBox.Font.Bold = true;
                headerTxtBox.Text = dt.Columns[i].ColumnName;
                headerCell.Controls.Add(headerTxtBox);
                headerRow.Cells.Add(headerCell);
            }
            newTable.Rows.Add(headerRow);

            // Add value rows
            for (int i = 0; i < dt.Rows.Count; i++)
            {
                TableRow valueRow = new TableRow();
                for (int x = 0; x < dt.Columns.Count; x++)
                {
                    TableCell valueCell = new TableCell();
                    TextBox valueTxtBox = new TextBox();
                    valueTxtBox.ID = "ValueTxtBox" + i.ToString() + i + x + x.ToString();
                    valueTxtBox.Text = dt.Rows[i][x].ToString();
                    valueCell.Controls.Add(valueTxtBox);
                    valueRow.Cells.Add(valueCell);
                }
                newTable.Rows.Add(valueRow);
            }
            return newTable;
        }

        public DataSet DefaultDataSet(int rows, int cols)
        {
            DataSet ds = new DataSet();
            DataTable dt = new DataTable("Dimensions");
            ds.Tables.Add(dt);

            DataColumn nameCol = new DataColumn();
            nameCol.Caption = "Name";
            nameCol.ColumnName = "Name";
            nameCol.DataType = System.Type.GetType("System.String");
            dt.Columns.Add(nameCol);

            DataColumn widthCol = new DataColumn();
            widthCol.Caption = "Width";
            widthCol.ColumnName = "Width";
            widthCol.DataType = System.Type.GetType("System.String");
            dt.Columns.Add(widthCol);

            if (cols > 2)
            {
                DataColumn heightCol = new DataColumn();
                heightCol.Caption = "Height";
                heightCol.ColumnName = "Height";
                heightCol.DataType = System.Type.GetType("System.String");
                dt.Columns.Add(heightCol);
            }

            if (cols > 3)
            {
                DataColumn depthCol = new DataColumn();
                depthCol.Caption = "Depth";
                depthCol.ColumnName = "Depth";
                depthCol.DataType = System.Type.GetType("System.String");
                dt.Columns.Add(depthCol);
            }

            if (cols > 4)
            {
                int newColCount = cols - 4;
                for (int i = 0; i < newColCount; i++)
                {
                    DataColumn newCol = new DataColumn();
                    newCol.Caption = "New " + i.ToString();
                    newCol.ColumnName = "New " + i.ToString();
                    newCol.DataType = System.Type.GetType("System.String");
                    dt.Columns.Add(newCol);
                }
            }

            for (int i = 0; i < rows; i++)
            {
                DataRow newRow = dt.NewRow();
                newRow["Name"] = "Name " + i.ToString();
                newRow["Width"] = "Width " + i.ToString();
                if (cols > 2)
                {
                    newRow["Height"] = "Height " + i.ToString();
                }
                if (cols > 3)
                {
                    newRow["Depth"] = "Depth " + i.ToString();
                }
                dt.Rows.Add(newRow);
            }
            return ds;
        }

        public DataSet XMLToDataSet(string xml)
        {
            StringReader sr = new StringReader(xml);
            DataSet ds = new DataSet();
            ds.ReadXml(sr);
            return ds;
        }

        public string DataSetToStringXML(DataSet ds)
        {
            XmlDocument _XMLDoc = new XmlDocument();
            _XMLDoc.LoadXml(ds.GetXml());
            StringWriter sw = new StringWriter();
            XmlTextWriter xw = new XmlTextWriter(sw);
            XmlDocument xml = _XMLDoc;
            xml.WriteTo(xw);
            return sw.ToString();
        }

        private int rowCount
        {
            get { return (int)ViewState["rowCount"]; }
            set { ViewState["rowCount"] = value; }
        }

        private int colCount
        {
            get { return (int)ViewState["colCount"]; }
            set { ViewState["colCount"] = value; }
        }

    Thanks in advance, Marko
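
    Not part of the original question, but a common way out of this chicken-and-egg problem, sketched under the assumption of the page members above: rebuild the table in PreRender, which runs after all click handlers, so the updated counters take effect within the same postback:

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            // PreRender runs after addRow_Click/addColumn_Click, so rowCount and
            // colCount already carry the incremented values for this postback.
            tablePanel.Controls.Clear();
            tablePanel.Controls.Add(DataSetToTable(DefaultDataSet(rowCount, colCount)));
        }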

    Read the article
