Search Results

Search found 18319 results on 733 pages for 'reference parameters'.

Page 97/733

  • Pretty URL in ADF Faces of JDeveloper 11.1.2.2

    - by Frank Nimphius
    Many features planned for Oracle JDeveloper 12c find their way into current releases of Oracle JDeveloper 11g R1 and JDeveloper 11g R2. One example of such a feature is "pretty URL" - or "clean URL" as the Oracle JDeveloper 11g R2 (11.1.2.2) documentation puts it. "A.2.3.24 Clean URLs Historically, ADF Faces has used URL parameters to hold information, such as window IDs and state. However, URL parameters can prevent search engines from recognizing when URLs are actually the same, and therefore interfere with analytics. URL parameters can also interfere with bookmarking. By default, ADF Faces removes URL parameters using the HTML5 History Management API. If that API is unavailable, then session cookies are used. You can also manually configure how URL parameters are removed using the context parameter oracle.adf.view.rich.prettyURL.OPTIONS. Set the parameter to off so that no parameters are removed. Set the parameter to useHistoryApi to only use the HTML5 History Management API. If a browser does not support this API, then no parameters will be removed. Set the parameter to useCookies to use session cookies to remove parameters. If the browser does not support cookies, then no parameters will be removed." See: http://docs.oracle.com/cd/E26098_01/web.1112/e16181/ap_config.htm#ADFUI12856 So basically, what this part of the documentation says is: In JDeveloper 11g R2 (11.1.2.2), Oracle ADF Faces automatically removes its internally used dynamic parameters from the URL. You can influence this with the prettyURL.OPTIONS context option, which, however, is not recommended, because the default behavior detects whether the browser client supports HTML5 History management; if it doesn't, ADF Faces uses a session cookie and, if that doesn't work either, falls back to the "old" behavior of adding URL parameters. What the documentation does not state explicitly is that this applies only to ADF Faces parameters (such as _afrLoop, Adf-Window-Id, etc.), but not to the ADF controller token (_adf.ctrl-state)! Removing the ADF controller token is an enhancement request that will be implemented in Oracle JDeveloper 12c.
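
    The context parameter mentioned above lives in the application's web.xml. A minimal sketch (the parameter name and the allowed values are taken from the documentation quoted above; the surrounding descriptor is assumed boilerplate):

        <!-- web.xml (sketch): control how ADF Faces removes its URL parameters.
             Allowed values per the documentation: off, useHistoryApi, useCookies. -->
        <context-param>
          <param-name>oracle.adf.view.rich.prettyURL.OPTIONS</param-name>
          <param-value>useHistoryApi</param-value>
        </context-param>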

    Read the article

  • Can the parameters of the Music Lens be edited?

    - by Ryan McClure
    I primarily listen to classical music on my laptop. Since I'm obsessed with the specifics of my music, I am precise about how I label my genres (Opera, Symphony, Chorale, etc.). Is there a way to edit the Music Lens so that instead of listing "Blues, Classic, Country..." it could list custom parameters? Could the same be done for the "Decade" parameter? Maybe make it "Century", since I have music from back in the 1400's :)

    Read the article

  • libgtk2.0-common fails to build with Gdk-2.0.gir error, Type reference 'GdkPixbuf' not found

    - by Stefano Palazzo
    I'm trying to build gtk, but it fails. Here's what I'm doing: sudo apt-get build-dep libgtk2.0-common sudo apt-get source libgtk2.0-common cd gtk+2.0-2.22.0/ sudo gedit gtk/gtktreeview.c & #...editing a few files (or not, it's the same error) sudo ./configure --prefix=/usr sudo make The compilation runs for a while and then quits: Gdk-2.0.gir: error: Type reference 'GdkPixbuf' not found ... make: *** [all] Error 2 What am I doing wrong?

    Read the article

  • How to get a Read-Write Reference to Parent GameObject from a script component attached to it?

    - by onguarde
    I have a game object (object) with a script component (myscript) attached. I have a reference to the myscript component through GetComponent, and I want to change the transform of the gameObject the script is attached to: myscript.gameObject.transform = (new value); The above code gives me an error: Property 'UnityEngine.GameObject.transform' is read only. Is there a way to get a read-write version?
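
    For what it's worth, the transform property itself cannot be reassigned in Unity, but the Transform it points to is fully writable. A minimal C# sketch (the component name and the target values are made up for illustration):

        using UnityEngine;

        public class MoveTarget : MonoBehaviour
        {
            void Start()
            {
                // Not allowed: gameObject.transform = someOtherTransform;  (read-only property)
                // Instead, modify the members of the Transform the GameObject already owns:
                Transform t = gameObject.transform;
                t.position   = new Vector3(0f, 1f, 0f);        // hypothetical position
                t.rotation   = Quaternion.Euler(0f, 90f, 0f);  // hypothetical rotation
                t.localScale = Vector3.one * 2f;               // hypothetical scale
            }
        }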

    Read the article

  • Cross-compiling with OpenSSL for Windows

    - by singpolyma
    I'm trying to compile the oauth-utils http://mir.dnsalias.com/oss/oauth/start for Windows from Ubuntu. I have compiled it on Windows before (a few months back), but wanted to try cross-compiling. I got openssl build using mingw32 ok, and put libssl.a and libcrypto.a in the right place. The linker is now finding the libraries (yay!) but I get the following error: /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xaac): undefined reference to `_CreateDCA@16' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xab9): undefined reference to `_CreateCompatibleDC@4' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xacc): undefined reference to `_GetDeviceCaps@8' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xadc): undefined reference to `_GetDeviceCaps@8' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xaf4): undefined reference to `_CreateCompatibleBitmap@12' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xb04): undefined reference to `_SelectObject@8' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xb18): undefined reference to `_GetObjectA@12' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xb81): undefined reference to `_BitBlt@36' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xb8c): undefined reference to `_GetBitmapBits@12' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xbe5): undefined reference to `_SelectObject@8' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xbec): undefined reference to `_DeleteObject@4' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xbf6): undefined reference to `_DeleteDC@4' /usr/lib/gcc/i586-mingw32msvc/4.2.1-sjlj/../../../../i586-mingw32msvc/lib/libcrypto.a(rand_win.o):rand_win.c:(.text+0xc00): undefined reference to `_DeleteDC@4' Any ideas what could be causing this? Thanks.
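
    For the record, every decorated name in that list (_CreateDCA@16, _BitBlt@36, and so on) is a Win32 GDI entry point that rand_win.c uses for entropy gathering, so the usual cause is that gdi32 is missing from the link line or appears before libcrypto. A hedged sketch of the final link command (the output and object names are placeholders; the real ones come from the oauth-utils Makefile):

        # Libraries must follow the objects that use them; gdi32 (and ws2_32, which
        # other parts of libcrypto need) go after -lcrypto.
        i586-mingw32msvc-gcc -o oauthsign.exe oauthsign.o \
            -L/path/to/mingw-openssl/lib -lssl -lcrypto \
            -lgdi32 -lws2_32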

    Read the article

  • Compiling C++ code with mingw under 12.04

    - by golemit
    I tried to set up compilation of C++ projects under my Ubuntu 12.04 with mingw and the Qt libraries. The idea was to get an executable independent of variations in the target Windows versions and in my colleagues' development environments. It was successfully implemented under OpenSuse 12.2 with mingw32 and some additional libraries, including mingw32-libqt4 and some others. Fine. However, when trying to do the same under Ubuntu 12.04 with mingw-w64, including the latest Qt 4.8.3 libraries copied from Windows, there were always errors. No luck. The typical errors in these attempts can be seen in the attachments. The commands used: qmake -spec /path_to_my_conf/win32-x-g++ my_project.pro make Can someone give a hint about the source of the problem? I would appreciate good advice. Serge Some extracts from the log: ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xec): undefined reference to `QDialog::accept()' ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xf0): undefined reference to `QDialog::reject()' ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x104): undefined reference to `non-virtual thunk to QWidget::devType() const' ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x108): undefined reference to `non-virtual thunk to QWidget::paintEngine() const' ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x10c): undefined reference to `non-virtual thunk to QWidget::getDC() const' ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x110): undefined reference to `non-virtual thunk to QWidget::releaseDC(HDC__*) const' ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x114): undefined reference to `non-virtual thunk to QWidget::metric(QPaintDevice::PaintDeviceMetric) const' ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x24): undefined reference to `__imp___Z21qRegisterResourceDataiPKhS0_S0_' ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x64): undefined reference to `__imp___Z23qUnregisterResourceDataiPKhS0_S0_' collect2: ld returned 1 exit status

    Read the article

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why: For non-/ partitions, it always uses <driveroot>/.Trash-<uid>; it never uses <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And the spec says "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it? root trash does not work, at least not out of the box. Open Nautilus as root and click on trash; it gives an error. Try to delete any file, and it says "it can't move to trash". Ok, I know this can be fixed by creating /root/.local/share. But the spec says "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error then? Bug? Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow the spec, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks! EDIT: Some more info: If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have support for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' ./Trash-xxx? If one can read/write his own Trash, one can do the same for the whole drive, including others' trashes, correct? Or does Gnome have some kind of "extra" protection on ./Trash-xxx folders on VFAT/NTFS fs? If Linux can "emulate" file permissions on NTFS mounting by editing /etc/fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use the /.Trash/xxx format... For the root issue: for the / partition, I can use trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access them. All I can do is manually "purge" them (by deleting files in /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I cannot even use trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why? About fstab: I use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have them pre-mounted, integrated in my fs, in mounts like /data , /windows/xp , /windows/vista , and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives. So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
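
    For anyone checking their own volumes against the spec, the two layouts discussed above look roughly like this (a sketch of the on-disk structure described by the FreeDesktop.org trash specification; $topdir is the mount point, 1000 a sample uid, and /data a sample mount):

        $topdir/.Trash/             # shared trash dir: must have the sticky bit set and be writable by all users
        $topdir/.Trash/1000/files/  # one sub-directory per uid, with files/ and info/ inside
        $topdir/.Trash/1000/info/
        $topdir/.Trash-1000/        # per-user fallback, used when $topdir/.Trash is absent or fails the checks
        $topdir/.Trash-1000/files/
        $topdir/.Trash-1000/info/

        # Quick checks from a shell:
        ls -ld /data/.Trash         # look for the sticky bit: drwxrwxrwt
        ls -d  /data/.Trash-$(id -u)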

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step by Step to Guide That was all it took to hook in the subscribed provider -Melissa Data- directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name Address Suite City State Zip I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and under the Reference Data tab, added my third party reference data provider –Melissa Data’s Address Check- and mapped in each domain that I had to the provider’s Schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by  Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simplistic for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • Error compiling / linking e text editor on Linux

    - by jckdnk111
    The code compiles without too much complaint, but the last step fails with the error below. There is some discussion about it on the e forum, but still no answer. [LD] e ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x0): multiple definition of `_pcre_OP_lengths' .objs.release/cx_pcre_tables.o:(.rodata+0x0): first defined here ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x70): multiple definition of `_pcre_utf8_table1' .objs.release/cx_pcre_tables.o:(.rodata+0x70): first defined here ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x88): multiple definition of `_pcre_utf8_table1_size' .objs.release/cx_pcre_tables.o:(.rodata+0x88): first defined here ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x8c): multiple definition of `_pcre_utf8_table2' .objs.release/cx_pcre_tables.o:(.rodata+0x8c): first defined here ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0xa4): multiple definition of `_pcre_utf8_table3' .objs.release/cx_pcre_tables.o:(.rodata+0xa4): first defined here ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0xc0): multiple definition of `_pcre_utf8_table4' .objs.release/cx_pcre_tables.o:(.rodata+0xc0): first defined here ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x180): multiple definition of `_pcre_utt_names' .objs.release/cx_pcre_tables.o:(.rodata+0x100): first defined here /usr/bin/ld: Warning: size of symbol `_pcre_utt_names' changed from 657 in .objs.release/cx_pcre_tables.o to 740 in ../external/out.release/lib/libpcre.a(pcre_tables.o) ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x480): multiple definition of `_pcre_utt' .objs.release/cx_pcre_tables.o:(.rodata+0x3a0): first defined here /usr/bin/ld: Warning: size of symbol `_pcre_utt' changed from 630 in .objs.release/cx_pcre_tables.o to 696 in ../external/out.release/lib/libpcre.a(pcre_tables.o) ../external/out.release/lib/libpcre.a(pcre_tables.o):(.rodata+0x738): multiple definition of `_pcre_utt_size' .objs.release/cx_pcre_tables.o:(.rodata+0x618): first defined here .objs.release/cx_pcre_exec.o: In function `match(doc_byte_iter, unsigned char const*, doc_byte_iter, int, match_data*, unsigned long, eptrblock*, int, unsigned int)': cx_pcre_exec.cpp:(.text+0x1c2a): undefined reference to `_pcre_ord2utf8(int, unsigned char*)' .objs.release/eauibook.o: In function `eAuiNotebook::LoadPerspective(wxString const&)': eauibook.cpp:(.text+0x9ad): undefined reference to `wxTabFrame::SetTabCtrlHeight(int)' .objs.release/PreviewDlg.o: In function `global constructors keyed to _ZN10PreviewDlg13sm_eventTableE': PreviewDlg.cpp:(.text+0x11b2): undefined reference to `wxEVT_WEB_TITLECHANGE' PreviewDlg.cpp:(.text+0x11ee): undefined reference to `wxEVT_WEB_DOMCONTENTLOADED' .objs.release/PreviewDlg.o: In function `PreviewDlg::RefreshBrowser(PreviewDlg::cxUpdateMode)': PreviewDlg.cpp:(.text+0x2a47): undefined reference to `wxWebControl::OpenURI(wxString const&, unsigned int, wxWebPostData*, bool)' .objs.release/PreviewDlg.o: In function `PreviewDlg::OnWebDocumentComplete(wxWebEvent&)': PreviewDlg.cpp:(.text+0x3259): undefined reference to `wxWebControl::GetCurrentURI() const' .objs.release/PreviewDlg.o: In function `PreviewDlg::PreviewDlg(EditorFrame&)': PreviewDlg.cpp:(.text+0x4984): undefined reference to `wxWebControl::IsInitialized()' PreviewDlg.cpp:(.text+0x49c5): undefined reference to `wxWebControl::wxWebControl(wxWindow*, int, wxPoint const&, wxSize const&)' PreviewDlg.cpp:(.text+0x562f): undefined 
reference to `wxWebControl::InitEngine(wxString const&)' .objs.release/PreviewDlg.o: In function `PreviewDlg::PreviewDlg(EditorFrame&)': PreviewDlg.cpp:(.text+0x68e4): undefined reference to `wxWebControl::IsInitialized()' PreviewDlg.cpp:(.text+0x6925): undefined reference to `wxWebControl::wxWebControl(wxWindow*, int, wxPoint const&, wxSize const&)' PreviewDlg.cpp:(.text+0x758f): undefined reference to `wxWebControl::InitEngine(wxString const&)' .objs.release/PreviewDlg.o: In function `PreviewDlg::OnButtonForward(wxCommandEvent&)': PreviewDlg.cpp:(.text+0x132): undefined reference to `wxWebControl::GoForward()' .objs.release/PreviewDlg.o: In function `PreviewDlg::OnButtonBack(wxCommandEvent&)': PreviewDlg.cpp:(.text+0x182): undefined reference to `wxWebControl::GoBack()' ../ecore/libecore.so(cxInternal.o): In function `cxInternal::MoveOldSettings(eSettings&)': cxInternal.cpp:(.text+0x4d29): undefined reference to `eSettings::SetPageSettings(unsigned int, wxString const&, doc_id, int, int, wxString const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<cxBookmark, std::allocator<cxBookmark> > const&, eSettings::SubPage)' collect2: ld returned 1 exit status make: *** [e] Error 1 EDIT: Forgot the link http://github.com/etexteditor/e

    Read the article

  • Passing parameters between interrupt handlers on a Cortex-M3 ...

    - by Captain NedD
    I'm building a light kernel for a Cortex-M3. From a high priority interrupt I'd like to invoke some code to run in a lower priority interrupt and pass some parameters along. I don't want to use a queue to post work to the lower priority interrupt; I just have a buffer and size to pass to it. The programming manual says that the SVC interrupt handler is synchronous, which presumably means that if you invoke it from an interrupt that's a lower priority than SVC's handler it gets called immediately (the upshot of this being that you can pass parameters to it as though it were a function call, a little like the BIOS calls in MS-DOS). I'd like to do it the other way: passing parameters from a high priority interrupt to a lower priority one (at the moment I'm doing it by leaving the parameters in a fixed location in memory). What's the best way to do this (if at all possible)? Thanks.
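
    One common way to do this on a Cortex-M3 (a sketch in C using CMSIS; the buffer, the handler names, and the choice of EXTI0 as the low-priority interrupt are assumptions, not anything from the question) is to stash the pointer and size in variables shared by the two handlers and then pend a lower-priority interrupt from the high-priority one. The NVIC only runs the low-priority handler after the high-priority one returns, so the "parameters" are stable by the time they are read:

        #include <stdint.h>
        #include <stddef.h>
        #include "stm32f10x.h"   /* assumed device header pulling in CMSIS and the IRQn names */

        /* "Parameters" handed from the high-priority handler to the low-priority one. */
        static uint8_t *volatile pending_buf;
        static volatile size_t   pending_len;

        void HighPrioTimer_IRQHandler(void)    /* hypothetical high-priority handler */
        {
            extern uint8_t capture_buffer[];   /* hypothetical data source */
            pending_buf = capture_buffer;
            pending_len = 128;

            /* Defer the rest of the work: pend a spare, lower-priority IRQ. */
            NVIC_SetPendingIRQ(EXTI0_IRQn);    /* assumed unused/low-priority IRQ line */
        }

        void EXTI0_IRQHandler(void)            /* runs once all higher priorities have returned */
        {
            uint8_t *buf = pending_buf;
            size_t   len = pending_len;
            /* ... process buf[0..len-1] at low priority ... */
            (void)buf; (void)len;
        }

    PendSV, pended through the PENDSVSET bit of SCB->ICSR, is the other interrupt commonly reserved for exactly this kind of deferred work in small kernels.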

    Read the article

  • "Object reference not set to an instance of an object": why can't .NET show more details?

    - by Simon Chadwick
    "Object reference not set to an instance of an object" This is probably one of the most common run-time errors in .NET. Although the System.Exception has a stack trace, why does the exception not also show the name of the object reference field, or at least its type? Over the course of a year I spend hours sifting through stack traces (often in code I did not write), hoping there is a line number from a ".pdb" file, then finding the line in the code, and even then it is often not obvious which reference on the line was null. Having the name of the reference field would be very convenient. If System.ArgumentNullException instances can show the name of the method parameter ("Value cannot be null. Parameter name: value"), then surely System.NullReferenceException instances could include the name of the null field (or its containing collection).

    Read the article

  • OJB Reference Descriptor 1:0 relationship? Should I set auto-retrieve to false?

    - by godzillasdm
    Hi, I am having an issue while using Apache OJB with Spring 2 inside my web app. I'm using OJB reference-descriptor with 2 foreign key properties. I have an object A (parent) and object B (referenced object). The thing is, for an object A, there may or may not be an object B. In the case where there is no object B to go with Object A, the object B seems to be instantiated (through Spring?) anyways. However, I am unable to access object B's members. Whenever I test if Object B == null, it always returns false even though there is no matching value in the database. Since this Object is never null, I figured I can test the object's member like so: if( objectb.getDocumentNumber == null) { return false; } However, I get an exception in the jsp: javax.servlet.jsp.el.ELException: An error occurred while getting property "documentNumber" from an instance class org.sample.pojo.Objectb$$EnhancerByCGLIB$$78022a2 and this exception in the debugger when it's creating the objectB: com.sun.jdi.InvocationException occurred invoking method. I am guessing that the reference-descriptor must be a 1:1+ relationship, instead of a 1:0+ relationship. I was wondering if I should set the property 'auto-retrieve' to false, and then use the PersistenceBroker.retrieveAllReferences(Object obj); method as directed. However, this method's return value is 'void', so I am guessing that Spring somehow creates, and sets the reference class for me. (Returning me back to the same issue I'm having.) I will need a way to test whether the reference object exists first. If not, don't call this retrieveAllReferences method, but I don't see how. Am I going about this all wrong? Does reference-descriptor not allow 1:0 relations? Any work around to my problem? Your suggestions are greatly appreciated!

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly wierd one. The ECMA specification simply states that it: constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type. There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj: newobj instance void IncrementableClass::.ctor() and for value types, you need to use initobj: .locals init ( valuetype IncrementableStruct s1 ) ldloca 0 initobj IncrementableStruct But, for a generic method, we need a consistent method that would work equally well for reference or value types. Activator.CreateInstance<T> To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error: newobj instance void !!0::.ctor() [IL]: Error: Unable to resolve token. This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter): .method private static !!0 CreateInstance<.ctor T>() { call !!0 [mscorlib]System.Activator::CreateInstance<!!0>() ret } Going further: compiler enhancements Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above: private static T CreateInstance() where T : new() { return new T(); } what you actually get is this (edited slightly for space & clarity): .method private static !!T CreateInstance<.ctor T>() { .locals init ( [0] !!T CS$0$0000, [1] !!T CS$0$0001 ) DetectValueType: ldloca.s 0 initobj !!T ldloc.0 box !!T brfalse.s CreateInstance CreateValueType: ldloca.s 1 initobj !!T ldloc.1 ret CreateInstance: call !!0 [mscorlib]System.Activator::CreateInstance<T>() ret } What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, lets dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory; using initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work? 
First, the stack transition for value types: ldloca.s 0 // &[!!T(uninitialized)] initobj !!T // ldloc.0 // !!T box !!T // O[!!T] brfalse.s // branch not taken When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues to to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null. ldloca.s 0 // &[!!T(null)] initobj !!T // ldloc.0 // null box !!T // null brfalse.s // branch taken Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types. Next... That concludes the initial premise of the Subterranean IL series; to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.

    Read the article

  • How to send a dynamic set of parameters to a webmethod using the jQuery ajax method?

    - by kranthi
    Hi, I am using the jQuery ajax method on my aspx page, which will invoke the webmethod in the code-behind. Currently the webmethod takes a couple of parameters like firstname, lastname, address, etc., which I am passing from the jQuery ajax method using data: JSON.stringify({fname:firstname, lname:lastname, city:city}). Now my requirement has changed such that the number and type of parameters being passed is not fixed; for example, the parameter combination can be something like fname,city or city,lname or fname,lname,city or something else. So the webmethod should accept any number of parameters. I thought of using arrays to do so, as described here. But I do not understand how I can identify which and how many parameters have been passed to the webmethod to insert/update the data in the DB. Please could someone help me with this? Thanks
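
    One way to keep the webmethod signature fixed while letting the set of fields vary (a sketch only; the class, method, and parameter names are hypothetical, and other options such as an array of name/value pairs work too) is to send the whole field set as a single JSON string and take it apart on the server:

        using System.Collections.Generic;
        using System.Web.Script.Serialization;
        using System.Web.Services;

        [System.Web.Script.Services.ScriptService]
        public class UserService : System.Web.Services.WebService
        {
            // Client side (sketch):
            //   var fields = { fname: firstname, city: city };             // only the fields you have
            //   data: JSON.stringify({ fieldsJson: JSON.stringify(fields) })
            [WebMethod]
            public string SaveUser(string fieldsJson)
            {
                var fields = new JavaScriptSerializer()
                                 .Deserialize<Dictionary<string, string>>(fieldsJson);

                // Only the keys the caller actually sent are present, so the
                // INSERT/UPDATE column list can be built from fields.Keys.
                foreach (KeyValuePair<string, string> pair in fields)
                {
                    // e.g. map pair.Key to a column and pair.Value to its parameter
                }
                return "saved " + fields.Count + " fields";
            }
        }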

    Read the article

  • How do I get a bundle reference from inside of a plugin with carbon?

    - by Nik Reiman
    I'm writing a C++ plugin in Mac OS X using the Carbon framework (yeah, yeah, I know, Apple is deprecating Carbon, but at the moment I can't migrate this code to Cocoa). My plugin gets loaded by a master application, and I need to get a CFBundleRef reference to my plugin so that I can access its resources. The problem is, when I call CFBundleGetMainBundle() during my plugin's initialization routines, it returns a reference to the host's bundle, not the plugin's. How can I get a reference to my plugin's bundle instead? Note: I would rather not use anything determined at compile-time, including calling CFBundleGetBundleWithIdentifier() with a hard-coded string identifier.

    Read the article

  • Is reference to bug/issue in commit message considered good practice?

    - by Christian P
    I'm working on a project where we have the source control set up to automatically write notes in the bug tracker. We simply write the bug issue ID in the commit message, and the commit message is added as a note to the bug tracker. I can see only a few downsides to this practice: if sometime in the future the source code gets separated from the bug-tracking software (or the reported bugs/issues are somehow lost), or if someone is looking at the commit history but doesn't have access to our bug tracker. My question is whether having a bug/issue reference in the commit message is considered good practice. Are there some other downsides?

    Read the article

  • What is the convention for the star location in reference variables?

    - by Brett Ryan
    I have been learning Objective-C, and different books and examples use differing conventions for the location of the star (*) when declaring reference variables: MyType* x; MyType *y; MyType*z; // this also works Personally I prefer the first option, as it illustrates that x is a "pointer type of MyType". I see the first two used interchangeably, and sometimes in the same code I've seen differing uses of both. I want to know what the most common convention is. It's been a very long time since I've programmed in C (15 years), so I can't remember whether all variants are legal in C as well or whether this is Objective-C specific. I'd prefer answers that state why one is better than the other, in the same way I explained how I read it above.
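
    A worked example of why many C and Objective-C programmers attach the star to the variable name rather than the type (plain C, purely illustrative): the star binds to the individual declarator, so in a multi-variable declaration only the name it touches becomes a pointer:

        #include <stdio.h>

        int main(void)
        {
            int* a, b;    /* looks like "int* applies to both", but b is a plain int */
            int *c, *d;   /* the star has to be repeated to make both c and d pointers */

            int x = 42;
            a = &x;
            c = &x;
            d = &x;
            b = x;        /* b = &x would not compile: b is an int, not an int* */

            printf("%d %d %d %d\n", *a, b, *c, *d);
            return 0;
        }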

    Read the article

  • What parameters to use to compare GUI frameworks / toolkits?

    - by gooli
    I'm doing some research on the best GUI toolkit to use for future products at the company. We're talking about a fairly large organization with quite a bit of code and a complete rewrite project in planning. Don't ask. Anyway, I'm trying to create a list of relevant parameters to judge the toolkits. What would you use to drive the comparison? Here's what I've got so far: maturity; ease of development; ease of prototyping; ease of maintenance; size of hiring pool; available knowledge at the company; training costs; community size; community level of expertise (how hard it is to find good answers to complex problems); amount of expert-level books available; ability to interface with other technologies; deployment considerations; visual aesthetics; ability to access OS resources; multiple monitor support (something that might come in handy in our particular application).

    Read the article

  • How to Properly Reference a JavaScript File in an ASP.NET Project?

    - by DaveDev
    Hi Guys I have some pages that reference javascript files. The application exists locally in a Virtual Directory, i.e. http://localhost/MyVirtualDirectory/MyPage.aspx so locally I reference the files as follows: <script src="/MyVirtualDirectory/Scripts/MyScript.js" type="text/javascript"></script> The production setup is different though. The application exists as its own web site in production, so I don't need to include the reference to the virtual directory. The problem with this is that I need to modify every file that contains a javascript reference so it looks like the following: <script src="../Scripts/MyScript.js" type="text/javascript"></script> I've tried referencing the files this way in my local setup but it doesn't work. Am I going about this completely wrong? Can somebody tell me what I need to do? Thanks
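
    One common way to make the same markup work both in a virtual directory and at a site root (a sketch using the standard ASP.NET "~" application-root operator; the script path is the one from the question) is to let the server resolve the path at render time:

        <%-- "~" expands to the application root, whether that is "/" or "/MyVirtualDirectory" --%>
        <script src="<%= ResolveUrl("~/Scripts/MyScript.js") %>" type="text/javascript"></script>

    ResolveUrl is available on any page or control, so the same tag renders /MyVirtualDirectory/Scripts/MyScript.js locally and /Scripts/MyScript.js in production without editing each file.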

    Read the article

  • Sql inline query with parameters. Parameter is not read when the query is executed.

    - by fzshah76
    Hi All: I am having a problem with my SQL query in C#. Basically it's an inline query with parameters, but when I run it it tells me that parameter 1 or parameter 2 is not there. Here is my query, declared at the top of the page as public: public const string InsertStmtUsersTable = "insert into Users (username, password, email, userTypeID, memberID, CM7Register) " + "Values(@username, @password, @email, @userTypeID, @memberID,@CM7Register ); select @@identity"; This is my code for assigning the parameters; I know I am having a problem, so I am assigning the params twice: Username = (cmd.Parameters["@username"].Value = row["username"].ToString()) as string; cmd.Parameters["@username"].Value = row["username"].ToString(); In one method it calls this query and tries to insert into the table; here is the code: Result = Convert.ToInt32(SqlHelper.ExecuteScalar(con, CommandType.Text, InsertStmtUsersTable)); The exact error message is: Must declare the variable '@username'.
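
    The error usually means the command is executed without the parameter collection attached: the values are added to one cmd object, but ExecuteScalar is then handed only the raw SQL text. A hedged sketch of the same insert done with a plain SqlCommand (the SQL and column names are from the question; the method wrapper and the DataRow source are illustrative):

        using System;
        using System.Data.SqlClient;

        public static class UserRepository
        {
            public static int InsertUser(SqlConnection con, System.Data.DataRow row)
            {
                const string sql =
                    "insert into Users (username, password, email, userTypeID, memberID, CM7Register) " +
                    "values (@username, @password, @email, @userTypeID, @memberID, @CM7Register); " +
                    "select @@identity";

                using (var cmd = new SqlCommand(sql, con))
                {
                    // The parameters are attached to the very command that gets executed.
                    cmd.Parameters.AddWithValue("@username",    row["username"].ToString());
                    cmd.Parameters.AddWithValue("@password",    row["password"].ToString());
                    cmd.Parameters.AddWithValue("@email",       row["email"].ToString());
                    cmd.Parameters.AddWithValue("@userTypeID",  row["userTypeID"]);
                    cmd.Parameters.AddWithValue("@memberID",    row["memberID"]);
                    cmd.Parameters.AddWithValue("@CM7Register", row["CM7Register"]);

                    return Convert.ToInt32(cmd.ExecuteScalar());
                }
            }
        }

    If the SqlHelper class is kept, it typically also has ExecuteScalar overloads that take a SqlParameter[] argument, which achieves the same thing: the parameters travel with the statement they belong to.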

    Read the article

  • Will removing unused query string parameters negatively affect SEO?

    - by trm
    Will changing links to remove query string parameters that are no longer used have any negative impact on search engine rankings? Say I have a page about.php on my site, and all of my links to this page are of the form http://www.example.com/about.php?foo=bar and I've made some changes to the script such that the parameter foo is no longer used. I would like to remove the unused parameter from the links so the URL will look cleaner, but I am concerned that this could cause problems with SEO. Is it safe to remove ?foo=bar from my links?

    Read the article
