Popular Content

Showing most liked content since 08/20/2017 in all areas

  1. 5 points
    Hi @Gary@ADL We have just finished some work which will provide a request list on each user's profile and contact record. Also accessible will be the list of services that they are subscribed to. This will provide easy access to view, manage, or export a list of requests associated with an individual customer (user or contact). This list is only visible to support staff and users of Service Manager; it will not be visible to the customer when viewing their own profile. As you can see, it is a fully operational request list similar to the main request list, with filters, exports, and column configuration. This is being prepared for the next Service Manager update and should be available in the live environment over the next couple of weeks. Regards, James
  2. 2 points
    Thanks @TrevorKillick! And I agree with "our developers worked some magic" ! They did a great job at it
  3. 2 points
    This is now available as of the Service Manager 1045 update. The following criteria have been included in the View configuration:
    - Request Source
    - Sub-status
    - Catalog
    - Member
    - Resolve By
    - Respond By
    Regards, James
  4. 2 points
    Hi @Martyn Houghton, I've made some changes to the cleaner tool, and the supporting code in Service Manager, so that once it's released you will be able to filter requests to be deleted by:
    - Associated Service IDs (multiples of)
    - Request Statuses (multiples of)
    - Request Types (multiples of)
    - Requests logged after a specific date & time
    - Requests logged before a specific date & time
    So an example of how your configuration might look with this new version: As mentioned, I've had to make some changes within Service Manager to support the changes to the cleaner utility, so once these Service Manager changes have made it to live (they will be in the next update), I'll release the new version of the utility to our public GitHub, and post back here to let you know. Cheers, Steve
  5. 2 points
    Hi @chrisnutt Ok, thanks for the explanation. This is a very complex query to formulate, so unless you know a little SQL it is a difficult one to quickly establish. It's just taken me quite some time, but I THINK I have managed to get something that may be suitable for you. Firstly, this is performed in Advanced Analytics, and you will be creating a "List of Counters" widget. The following screenshot shows the finished article, but to get to this when you create it, you need to add some text and an icon, and then click the edit button. This gives you the screen to enter the data for the widget. Try entering the following in the appropriate locations in the screenshot below:
    1 - Avg
    2 - (TIMESTAMPDIFF(second,h_datelogged,NextDate))
    3 - (SELECT h_pk_reference, h_ownerid, h_datelogged, (SELECT Min(h_datelogged) FROM h_itsm_requests T2 WHERE T2.h_datelogged > T1.h_datelogged AND h_fk_priorityname = 'Immediate' AND h_requesttype = 'Incident') AS NextDate FROM h_itsm_requests T1 WHERE h_fk_priorityname = 'Immediate' AND h_requesttype = 'Incident') AS AllDays
    The criteria h_fk_priorityname = 'Immediate' and h_requesttype = 'Incident', which appear twice in number 3 above, are specific to your organisation and identify the tickets you want to compare the dates between - above you mentioned this is all incidents of a particular priority (Immediate), so I have included that in the example... but you may want to add to or amend these criteria. I hope this helps. Let me know if you run into any difficulties (I ran into plenty during the investigation of this!). Kind Regards Bob
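To make the intent of the SQL above concrete, here is a minimal Python sketch of the same calculation: the widget averages the number of seconds between each ticket's h_datelogged and the next ticket's. The sample timestamps below are hypothetical, not real request data.

```python
from datetime import datetime

def avg_seconds_between(timestamps):
    """Average gap in seconds between consecutive events, mirroring
    the Avg(TIMESTAMPDIFF(second, h_datelogged, NextDate)) query."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return 0.0
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical h_datelogged values for Immediate-priority Incidents
logged = [
    datetime(2017, 8, 21, 9, 0),
    datetime(2017, 8, 21, 10, 30),
    datetime(2017, 8, 21, 13, 30),
]
print(avg_seconds_between(logged))  # 8100.0 (i.e. 2h 15m on average)
```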
  6. 2 points
    Hi Mark, Yes, we are aware of the issue, and it has already been fixed in the next Service Manager update, which should be out later this week. Thanks, Ryan
  7. 1 point
    We are setting up a 'new' IT support team which is going to support one specific IT service. We currently have about 30 services and 13 support teams. Currently, ALL of our services are supported by ALL of our teams. However, this new team will only be supporting ONE service. As far as I can see, to get this set up so that the new team doesn't 'see' the services it won't be supporting, I will need to go through all 30 services and add each team (except the new team) as Supporting Teams individually. My maths isn't great, but I think that adds up to a lot of clicking... Is there a way of adding Supporting Teams by exception (as in, all teams EXCEPT a particular one will be supporting that service)? Or a quick way of adding all the teams so that I just have to remove one for each service? Thanks
  8. 1 point
    Hi @SJEaton Sorry I'm a bit late to the discussion but I just wanted to contribute, and potentially take a step back. Firstly - I would advise configuring any custom fields from the Service and not directly from a request. Doing it from a request simply makes things more confusing, and you have more control when using the "View Details" form on the Service. If we were to create a brand new Service, as mentioned above, we have the option to use up to 17 custom fields (Custom A to Custom Q). This is also per Request Type - so you can have 17 custom fields for Incidents against your Service, and 17 for Service Requests against your Service. In my screenshot below, I've configured them all in the view details form. I've also made all of them visible even if no value exists. And as you can see, when I raise an Incident, this is what I see: You can have multiple Catalog Items with Progressive Captures against a Service - and as you have done, you can map the responses to the catalog items. But regardless of how many Catalog Items/Progressive Captures you have, you still only have the 17 custom fields against that Service to play with. So, for example, in my Service "Bobs Service 10" I have two catalog items. For my "New Starter" Catalog Item, I have created a Progressive Capture with 16 questions, all about the new starter, e.g. "What is the Starter Name?", "What is the Starter Date?", "What is the Starter Manager?" ... "What is the Starter Email?" In theory I can use 16 of my custom fields against that Service, each with a unique label, to map all the answers to - if I need to. So: "Starter Name" --> (maps to h_custom_a "Starter Name"), "Starter Date" --> (maps to h_custom_b "Starter Date"), "Starter Manager" --> (maps to h_custom_c "Starter Manager") ... "Starter Email" --> (maps to h_custom_p "Starter Email"). Now I come to my Leaver Catalog Item and Progressive Capture.
Because this falls under the SAME service, if I want to map to any custom fields with a unique label, I only have one left - h_custom_q. If I have understood correctly, you are in a scenario where you have multiple customer questions in a number of Progressive Captures and you are trying to capture them on the request details. The bad news is that, as per the above, you are limited to 17 per service. So how do you resolve this? You have a few options: 1) Split out the Service - in my example above, if I really needed to capture those questions in the request details, I would create a new service that is specific to a New Starter. Each Service gives you 17 custom fields per type. 2) Consider why you need the answers as custom attributes - all Progressive Capture answers are captured within the questions section anyway. The main reason that people want to map answers to custom fields is so that they can be edited in the future (e.g. a Change Implementation Plan). But if the data is unlikely to change (e.g. the name of a New Starter), do you really need that answer to be mapped? 3) Use Shared Labels - a bit more tricky and this will involve some careful consideration - but are there any custom attributes that can be shared across your Catalog Items? In my example above, perhaps instead of "Starter Name" and "Leaver Name" as the label for h_custom_a, I would just have "Name" so it can be shared. Apologies if this is going over old ground or I have misunderstood any of the original issue that you posted - I also appreciate this is a bit harder to do when you have already set up your Service Catalog in a particular way, as you need to rework it rather than begin at the Service/Custom Fields level as I have done above. But please let me know if you have a specific issue or problem to overcome and perhaps we can assist in a more granular capacity. Kind Regards Bob
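To illustrate the fixed pool of 17 custom fields (h_custom_a to h_custom_q) and the shared-label idea in option 3, here is a rough Python sketch; the assign_mappings helper and the sample labels are hypothetical, not part of Service Manager itself.

```python
import string

# The 17 per-service custom fields: h_custom_a .. h_custom_q
CUSTOM_FIELDS = [f"h_custom_{c}" for c in string.ascii_lowercase[:17]]

def assign_mappings(labels):
    """Map question labels onto the fixed pool of custom fields.
    A shared label (option 3 above) consumes only one field, so
    'Name' used by both Starter and Leaver captures counts once."""
    mapping = {}
    for label in labels:
        if label in mapping:
            continue  # shared label: reuse the same field
        if len(mapping) >= len(CUSTOM_FIELDS):
            raise ValueError("more unique labels than custom fields")
        mapping[label] = CUSTOM_FIELDS[len(mapping)]
    return mapping

# 'Name' appears in two catalog items but uses a single field
m = assign_mappings(["Name", "Start Date", "Manager", "Name"])
print(m["Name"])  # h_custom_a
```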
  9. 1 point
    Thanks Trevor, This has now worked with 1 worker. It took a long time, but that isn't a problem as this runs at about 5.30 every evening, so we will stick to using 1 worker. Cheers, Samuel
  10. 1 point
    Hi Samuel, This does make sense. I'll put this through to our development team to have a look at. Regards, James
  11. 1 point
    When using Request List Views and sharing them with your team/colleagues, it would be useful if, for conditions such as 'Owner', you could insert a dynamic variable for the current analyst id rather than a literal value. This could also apply to other analyst-id-related fields such as Closed By, Created By, Resolved By etc. This way, as a manager, I can maintain a common set of views I share with my team, but they will 'personalise' automatically to the analyst who is running them, without them needing to create their own copy. Cheers Martyn
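A rough Python sketch of what such a dynamic variable could look like when a shared view is evaluated; the {currentUser} placeholder name and the condition structure are purely hypothetical, not an existing Service Manager feature.

```python
def resolve_conditions(conditions, current_user):
    """Substitute a hypothetical {currentUser} placeholder with the
    id of whoever is running the shared view, leaving literal
    condition values untouched."""
    return {
        field: (current_user if value == "{currentUser}" else value)
        for field, value in conditions.items()
    }

# One shared view definition, personalised per analyst at run time
shared_view = {"status": "Open", "owner": "{currentUser}"}
print(resolve_conditions(shared_view, "martynh"))
# {'status': 'Open', 'owner': 'martynh'}
```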
  12. 1 point
    Exactly Why do ITSM Vendors Lead with ITIL? I was inspired to write this article on the back of a question asked on the Back2ITSM community by William Goddard which was... I think the answer to the question is obvious, but we can explore it by looking at the role of a vendor in a niche industry. Firstly, and most obviously I think, vendors do not choose to lead with ITIL, Pink Verify or anything else. The buying public chooses, and vendors simply make and sell what they are asked for. The problem with niche markets like the ITSM space is there are different parties, with different agendas, and for the most part they are in conflict with each other. The Customer Organisation – needs improved efficiency and better ROI on its investments. They don’t care how it is done, and often don’t know what they need to do either. The command from above is ‘get it done’ and they want demonstrable results, measured essentially in reduced costs/increased business value. The Buying Customer – for the sake of this example, is the IT department and/or the people directly responsible for running IT Service within the organisation. They are under pressure to succeed by showing business value, with a backdrop of serious competition from consumerisation of IT, BYOD and cloud providers. They’re following an IT strategy, which often doesn’t dovetail with a business strategy. They don’t really know what to do, and things move so quickly that they are looking for help and guidance, so often tune into the next ‘silver bullet’ that has traction and early success. The ITSM Influencers – the people who guide the industry; experts, authors, pundits, bloggers, consultants, analysts, training and certification organisations… independent trusted advisors. The Vendors – the people who deliver the tools that balance the needs and wants of the customer with ever-changing requirements, to deliver efficiency and lasting value that justifies the significant expense of their ITSM investments.
With the definitions out of the way, let me explain some of the behaviours I’ve witnessed, and forgive me if I hit a nerve or two along the way. Let’s start with the Organisations. They are absolutely right - IT is expensive, often inefficient, and more often than not, struggles to demonstrate business value. Over the last 15 years, whilst ITIL has enjoyed prime-time, technology has changed radically, and the security that surrounds it is placing a larger burden on IT. Don’t get me wrong, security and privacy concerns should be taking centre stage, but there’s a cost, and the greater the demand for better protection, the higher that cost will be. Security teams now carry more weight than any other IT group, and that’s the biggest change that I’ve observed in the last 20 years. Once you are past the organisational governance and procurement, let us talk about The Buying Customer. Customers ask for ITIL, so vendors create solutions around it, and many lead with it. Vendors are in the business of selling products, so market forces of supply and demand are what apply here, and there’s nothing wrong with that. If customers consistently asked for a service desk tool that included an IoT coffee maker, trust me, vendors would start to provide it. If we accept this notion, then we have the answer to the question “Why Does a Vendor Lead with ITIL”. Perhaps a more interesting question is “Why DO Vendors Lead with ITIL?” The ITSM Influencers – if buying customers need help, and if influencers in the ITSM community say, “you need to be doing ITIL”, then customers will ask vendors for ITIL. It’s somewhat ironic, then, when influencers berate vendors for leading with this. It should be remembered that influencers have a commercial agenda too. It amuses me when industry pundits say “Vendors should sell solutions to problems and not sell product features.” The implication being “vendors just want to sell products, so shouldn’t be trusted.
Instead, you should listen to us, and buy some consulting, education, certification, or get our help during your product selection process, because we’re independent and can be trusted.” If I sound cynical, perhaps I am, but I’m just pointing out that it’s not only vendors that have products and services to sell. Influencers work with vendors too, because vendors have sustainable revenue sources and are often “less good” at talking the talk. Just pick your favourite expert or industry pundit and google them - the odds are good you will find a video, blog or white paper written by them for a vendor. On to the Vendors then – it is true, vendors are in the business of selling products/licenses/subscriptions. I make no bones about it, because that’s what vendors do. It’s usually honest and transparent – money for software that delivers productivity. But the assertion that a vendor is not interested in helping customers succeed is nonsense. With a SaaS, pay-as-you-go business model, that viewpoint is ridiculous. I can’t speak for other vendors, but our motivation is to help customers be successful. Our efforts are often hampered by complex procurement, regulatory controls and 200-page RFI/RFP documents that make it as difficult as possible for vendors to comply, meet requirements, and also deliver real value. Isn’t it time for influencers and the community at large to stop referring to vendors as the “Dark Side” to justify “independent” services prior to vendor selection? To simply trade and exist, vendors have to:
- Make products to meet requirements that customers cannot fully quantify
- Navigate regulatory and governance requirements in a landscape that’s constantly changing
- Deliver consulting, training and education to customers - free of charge - during sales cycles, pre-sales and pilot projects
- Keep up with the latest “shiny things”, because customers continuously ask for them
- Answer the same questions, in the same RFPs – yes, that happens… often – and submit a response that’s contractually binding
- Differentiate with products/features against ‘unknown’ competition. As a side note, in almost all cases, when a vendor is in a competitive situation and the customer will not disclose who we are competing against, we can generally guess. By the second round of demos, we’re asked for the “shiny thing” that was in another product – so we usually know who we’re up against
- Take the blame. Despite the buying process, independent consultants, implementation process or the day-to-day management of the solution, if it fails, the product is blamed. Everyone else washes their hands of it and moves on to the next project.
Long after the ITIL foundation training is done, when the consultant is gone, and the people who implemented your solution have moved on, as a vendor, we will still be there, supporting you, and doing what we can to help you succeed. I rarely see an RFP that spells out the business problems that need to be solved. More often than not, it’s a shopping list of features/functionality, often derived from the bits people liked about their existing solution, topped with generic ITSM requirements based on a commonly used template. If customers would just explain the business problems they’re trying to solve, vendors would be in a better position to help. Vendors sell what customers ask for. Customers ask for the latest silver bullets that the industry pundits are promoting. Customers are told that vendors have an agenda and only want to sell their products, and that you need independent advice… and round and round we go… The Hornbill Promotion Bit: I am proud to say that as a vendor we do not lead with ITIL. We have to fit within the ITIL box, but we will never allow innovation to be stifled by ITIL dogma. We lead with technology innovations that improve the way our customers work.
We listen to concepts and blue-sky thinking, but we base our products on practical, tangible things you can touch, see and use every day. With pay-as-you-go, no contractual tie-in arrangements, the balance of power has shifted to the customer. Vendors want customers to succeed, quite simply because their revenue and long-term sustainability depends on your continued success. In the age of on-premise software, with large up-front costs and long-term contracts, the vendor had the edge, and customers had to “sweat the asset” and “justify the spend”. Today, if the vendor doesn’t deliver value, customers can walk away. If you’re a SaaS customer and you need help, just reach out to your vendor; I guarantee they’ll be highly motivated to do everything they can to support you.
  13. 1 point
    Using WebHooks for Integration Web hooks are a great way to integrate Hornbill with other applications. A web hook can send information to an HTTP endpoint as soon as a record is created or updated, rather than relying on scheduled imports or continual polling for data. A web hook is the opposite of an API call: it is a call over HTTP from your Hornbill instance to a web endpoint of your choosing. Most application actions on a Hornbill instance can trigger an action-specific event when the action is performed, and Hornbill can be configured to call a web endpoint, passing the action-specific data to the web service being invoked. This is a very powerful mechanism that enables true, near real-time integration with other business systems.
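As an illustration of the receiving side, here is a minimal Python sketch of an endpoint handler parsing a web hook payload. The payload field names (entity, action, recordId) are assumptions for the example, not a documented Hornbill format; check your instance's actual payload.

```python
import json

def handle_webhook(body: bytes):
    """Parse a hypothetical web hook payload and pull out the
    fields an endpoint might act on (illustrative shape only)."""
    event = json.loads(body)
    return {
        "entity": event.get("entity"),
        "action": event.get("action"),
        "record_id": event.get("recordId"),
    }

# A made-up payload such as an HTTP server might receive
sample = json.dumps({
    "entity": "Asset",
    "action": "update",
    "recordId": "AST00001042",
}).encode()
print(handle_webhook(sample))
# {'entity': 'Asset', 'action': 'update', 'record_id': 'AST00001042'}
```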
  14. 1 point
    @Paul Alexander Based on our communication in the support call, it was a case of the catalog item being inadvertently created without a progressive capture. We have put in place some code to prevent catalog items from being saved without a progressive capture, so this won't happen again. Thanks Pamela
  15. 1 point
    Hi @Lyonel This is controlled in the admin portal under Home > System > Settings > Advanced, and the setting is notification.excludeActions. Anything ticked under this setting will not be available in the user profile notification settings. Hope this helps. Kind regards Conor
  16. 1 point
    Hi @Paul Alexander Apologies, I completely missed this. There was a new release of the LDAP user import script released yesterday which may help you with this too; all details and necessary files can be downloaded from here - https://github.com/hornbill/goLDAPUserImport There are 2 methods I would use to import multiple AD groups - either match the names of the groups in AD to the names of the organisations in Hornbill, or split the imports up and hardcode the group names. The first method is the easiest, but requires your AD data to be up to date and spelt correctly. In AD, each customer will need an attribute that specifies which group they are in, e.g. Finance in the Department field. If the organisation in Hornbill is also Finance, then the orglookup function in the import script will simply match the AD string with the Hornbill group, and that customer will then be automatically linked to the Finance department. Providing all the organisations in Hornbill match all the values in the department string in AD, all customers will then be linked automatically to their relevant groups. The latest import script will automatically remove previous associations if this changes in the source (AD), and add the customer or user to the new group as specified in AD if necessary. The second method involves creating multiple import scripts and, rather than using a variable for the orglookup (i.e. department), using a hardcoded value. Any value in the mapping that has square brackets [variable] will use the variable from AD, and any value that has quotation marks "hard coded" will be the string that goes in for every customer imported on that script. So this method will be more exact, but there may be more import scripts to manage. To set this up, I would use the filter at the top of the import script to only select users from a particular group in AD, or use the DSN search root that will only select users from a particular group.
Either way, you will only be looking at a subset of customers in your AD per import script. In that import script, hard code the group that every customer in that script will be part of by putting the group name (which will need to match the group name in Hornbill) in quotation marks in the Attribute bit of the orglookup function at the end of the import script. This will mean that every customer imported in that script will then be a part of that organisation. You can have as many scripts running each hour/day/week as you want, but remember that each user will have their group set by the last import that runs - so if I was in 2 different import scripts for whatever reason, then I would be a part of the group specified in the last import script that runs with me in it. In theory, all customers should be imported in different scripts, so this shouldn't be an issue, but if you do need multiple group associations then the latest import script can cater for this with the "OnlyOneGroupAssignment" function in the orglookup bit. Either method will enable you to import multiple groups, but it sounds like you will need the second method so you can definitively put that group of AD users into this Hornbill group. You can also filter the source down to one user record (using the filter or the DSN at the top of the script) to test how it works first before rolling it out to multiple users and groups, but it is straightforward once you have set it up once, because you can then use that tested script as a template and tweak the filter/DSN and the hardcoded group names. I hope this helps - lots of detail, but this will give you the outcome you are looking for (and anyone else with similar requirements). Kind regards Conor
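The [variable] versus "hard coded" rule described above can be sketched in Python; this is an illustrative re-statement of the rule, not the actual goLDAPUserImport code.

```python
def resolve_org_value(config_value, ad_record):
    """A value wrapped in square brackets is read from the AD
    record; a plain value is hard-coded for every imported user.
    Illustrative only - not the real goLDAPUserImport logic."""
    if config_value.startswith("[") and config_value.endswith("]"):
        return ad_record.get(config_value[1:-1], "")
    return config_value

user = {"department": "Finance"}
print(resolve_org_value("[department]", user))  # Finance (from AD)
print(resolve_org_value("Finance", user))       # Finance (hard-coded)
```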
  17. 1 point
    Hi @Graham1989 The behaviour you are experiencing has already been changed in the next update to default to 'All Sites' when there are no 'Customer Sites' available. In fact the 'Customer Sites' tab will be removed from view as it is not required if the customer does not have any sites. Regards, Alex
  18. 1 point
    Hi @samwoo, We've identified the problem and made a fix. If we don't find any further issues, you will have the fix tomorrow in live. Thank you for your help on this and sorry for the trouble. Daniel.
  19. 1 point
    @TrevorKillick That did the trick with the "URI": "[thumbnailPhoto]" - we now have pictures again. Might be worth updating the example section in the download as well. Thanks. Martyn
  20. 1 point
    @Martyn Houghton Can you change "URI": "thumbnailPhoto" to "URI": "[thumbnailPhoto]" and try again please? This was a change made recently to keep field mapping unique across the configuration variables. Also make sure thumbnailPhoto is listed in the attributes section of the configuration. Kind Regards Trevor Killick
  21. 1 point
    Hi @Martyn Houghton, I had already queried this before and I have no idea where I got my response from. This is currently possible to do: https://wiki.hornbill.com/index.php/LDAP_User_Import Do a search for "ImageLink" and the details are there. You will need to upgrade your LDAP_Import tool and conf files if you haven't done so for a while. The URI we use is "thumbnailPhoto", which also has to be defined under LDAPAttributes. The example provided on the GitHub link below is basically what we use. GitHub link: https://github.com/hornbill/goLDAPUserImport Hope that helps, Samuel
  22. 1 point
    @Martyn Houghton and @nasimg We have just finished a minor change which will allow you to set a from address when sending an email from the BPM. This will be available in an update to Service Manager over the next couple of weeks. This includes the following BPM operations:
    - Entity/Requests/Email notification/Email Contact
    - Entity/Requests/Email notification/Email Co-worker
    - Entity/Requests/Email notification/Email Customer
    - Entity/Requests/Email notification/Email Customer's Manager
    - Entity/Requests/Email notification/Email External Address
    - Entity/Requests/Email notification/Email Request Owner
  23. 1 point
    Hi @shamaila.yousaf Thanks for the suggestion. We have had a look at this and we think it's a nice idea, so we are adding it to our backlog, but it's not presently in our 90-day development window. We will post here as soon as we schedule something around it. Gerry
  24. 1 point
    No problem @Paul Alexander, glad it's working, or at least that part. Enjoy home ;-) Daniel.
  25. 1 point
    Hi @Paul Alexander, @Martyn Houghton, It is possible. You place the bold markup outside the link, like this: '''[[http://www.hornbill.com|Hornbill ]]''' That works fine. How were you trying to do it? Regards, Daniel.
  26. 1 point
    @Martyn Houghton It does now Kind Regards Trevor Killick
  27. 1 point
    @Pamela Yes, the update went fine this time, and multiple request selection is working now as well. Thanks Ralf
  28. 1 point
    Don't worry, I think it's rectified itself. Please ignore
  29. 1 point
    That's great news @shamaila.yousaf
  30. 1 point
    Thanks @David Hall, you were correct - we had it set to update request; I've now changed this to logorupdateincident, so hopefully this sorts it
  31. 1 point
    Hi Martyn, Sorry, but I seem to have overlooked this post from July. This is a good idea and something that we would like to introduce. As you had previously used Supportworks you will be familiar with the operator scripts that could be presented based on category selection. This isn't currently scheduled but we will continue to review and get this progressed at some point. Regards, James
  33. 1 point
    Hi @DeadMeatGF and @James Ainsworth, Many thanks, I did let the Integrations Officer know yesterday and he was impressed by how much can be done via the Webhooks. Many thanks for your help and assistance! Thanks, Samuel
  34. 1 point
    Hi @samwoo As DeadMeatGF suggests you should be able to use the Asset Update for this. This will create a web call on all updates, so your end point might need a way to filter it down to just the criteria that you are looking for such as when it is assigned. This is done in Administration under System->Settings->Webhooks. Regards, James
  35. 1 point
    Hi Shamaila, Thanks for your post. That is an interesting idea. We do have some scheduled work that provides a plug-in to Document Manager on a request that will help manage a link between select documents on a particular request. Maybe looking at some automation might be a next step. One of the challenges with emails and their attachments is when there are logos and other images that have been used in someone's signature. I'm not sure how we would be able to control these from being added to Document Manager. Maybe this automation should only be available for doc, pdf, xls, type files and not include images. Regards, James
  36. 1 point
    Hi @Martyn Houghton Apologies for the delay in response, was about to post back this afternoon, I have replicated the issue locally and I'm working on a solution right now. When I have it completed I'll let you know. Regards, Dave.
  37. 1 point
    @TrevorKillick Things are looking much better now. It seems to be stable - I have not had any error messages for the last 30 minutes or so.
  38. 1 point
    Hi @Paul Trenter @Martyn Houghton is exactly right - the from address does not match the name of the shared mailbox in question. I have just gone through and tested it myself, and it seems that the problem is not with the config but with the test email address being hardcoded as 'do-not-reply@live.hornbill.com'. I have also gone through and tested the email configuration within the application, and that does work correctly - you will be able to continue sending emails out of the system; it is just that the testing SMTP section has the from address hardcoded, which needs to be changed by us. So you can continue working, because your mailbox will send emails out. Thank you for pointing this out - it will be raised and fixed asap. Thanks Conor
  39. 1 point
    @Paul Trenter Does your Office 365 login id match the email address that you are attempting to send from, i.e. the default email address of the shared mailbox in question? Cheers Martyn
  40. 1 point
    Hi @Michael Sharp @nasimg @samwoo @Martyn Houghton I am pleased to inform you that an enhancement request has been raised for this requirement and I have registered your interest. We will let you know as soon as this is moved into the development queue. Thanks, Ehsan
  41. 1 point
    @David Hall @James Ainsworth I had not realised that you had to add a rule at the Service level to select the Service Level Agreement first. I had added the organisation condition in the Service Level Agreement rules, not the Service. Now that I have added rules to select the Service Level Agreement on the Service, it is selecting the correct Agreement and Level as required. I am presuming that I no longer require the organisation condition in the Agreement-level rules. Cheers Martyn
  42. 1 point
    (shhh, I'm still on holiday :P) +1 - my colleagues have complained about this before; why put in a reason when sometimes the sub-status is reason enough? An option to make the reason mandatory or not, as well as hidden or not, would be great. Also +1 regarding the visibility option too.
  43. 1 point
    @Daniel Dekel thanks for the swift reply, and I'm glad it can be put in the next build. This will make it a lot easier for us to adopt timesheet mgr. Darren
  44. 1 point
    Hi @Darren Rose, We found a problem in the email with Timesheet. The fix will be shipped in our next build. Will probably take a week to see this fix. Sorry for the trouble, Daniel.
  45. 1 point
    Hi @nasimg I've done some tests and I can see that a post from a customer on the portals changes the colour of a request in the request list, however as you suggest a comment does not. I will ask the developers to take a look and provide some feedback. James
  46. 1 point
    Please can it be considered to add the snippets section to the resolve call section (and the ability to attach files would also be nice), thanks Gary
  47. 1 point
    Hi @Paul Alexander This change is now in a queue for our development team. I'll update the post once we are nearer to having it ready. Regards, James
  48. 1 point
    Hi Samuel, I just wanted to let you know we have completed some work to include the members as an option in the conditions for the Views. This will be available over the next week or two. Keep an eye open for the release notes. Regards, James
  49. 1 point
    @SJEaton Sorry for the delayed reply, I am a bit late here... but yes, it would be the same node logic for all other custom fields storing dates. For reference, in case someone else finds this thread and/or is interested in the solution, this is how it was done (date format changed) via the BP using iBridge integration nodes: First, have an "Integration Call" node to convert the date value stored in a custom field (let's say Custom A). Second, have an "Update Request" node, for custom fields, which updates the same custom field (in our example Custom A) with the converted value (which is the response parameter from the Integration Call node). You should now have the converted value in your custom field!
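As a rough illustration of what the Integration Call node is doing to the stored value, here is a Python sketch of a date-format conversion; the source and target formats shown are assumptions for the example, not the formats used in the original thread.

```python
from datetime import datetime

def convert_custom_date(value, src_fmt="%d/%m/%Y", dst_fmt="%Y-%m-%d"):
    """Reformat a date string, as the Integration Call node does for
    the value held in Custom A. Formats here are illustrative."""
    return datetime.strptime(value, src_fmt).strftime(dst_fmt)

print(convert_custom_date("25/12/2017"))  # 2017-12-25
```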
  50. 1 point
    @Alex8000 Being able to set the default at a service level would at least help in the short term. Cheers Martyn